00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 1717
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 2978
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.127 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.127 The recommended git tool is: git
00:00:00.128 using credential 00000000-0000-0000-0000-000000000002
00:00:00.129 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.173 Fetching changes from the remote Git repository
00:00:00.174 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.224 Using shallow fetch with depth 1
00:00:00.225 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.225 > git --version # timeout=10
00:00:00.257 > git --version # 'git version 2.39.2'
00:00:00.257 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.258 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.258 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.211 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.222 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.232 Checking out Revision 9a23290da272374f14acecb1f0954a7f78afc3cb (FETCH_HEAD)
00:00:07.232 > git config core.sparsecheckout # timeout=10
00:00:07.241 > git read-tree -mu HEAD # timeout=10
00:00:07.256 > git checkout -f 9a23290da272374f14acecb1f0954a7f78afc3cb # timeout=5
00:00:07.272 Commit message: "jenkins/perf: add artifacts cleanup for spdk files"
00:00:07.272 > git rev-list --no-walk 9a23290da272374f14acecb1f0954a7f78afc3cb # timeout=10
00:00:07.350 [Pipeline] Start of Pipeline
00:00:07.364 [Pipeline] library
00:00:07.365 Loading library shm_lib@master
00:00:07.366 Library shm_lib@master is cached. Copying from home.
00:00:07.383 [Pipeline] node
00:00:22.399 Still waiting to schedule task
00:00:22.400 Waiting for next available executor on ‘vagrant-vm-host’
00:21:48.464 Running on VM-host-WFP7 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2
00:21:48.466 [Pipeline] {
00:21:48.479 [Pipeline] catchError
00:21:48.480 [Pipeline] {
00:21:48.496 [Pipeline] wrap
00:21:48.507 [Pipeline] {
00:21:48.514 [Pipeline] stage
00:21:48.515 [Pipeline] { (Prologue)
00:21:48.532 [Pipeline] echo
00:21:48.533 Node: VM-host-WFP7
00:21:48.538 [Pipeline] cleanWs
00:21:48.551 [WS-CLEANUP] Deleting project workspace...
00:21:48.551 [WS-CLEANUP] Deferred wipeout is used...
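The checkout above reduces to a handful of plain git commands; a minimal sketch for reproducing the same shallow, single-revision checkout outside Jenkins (URL, refspec, and commit hash taken from the log; credential and proxy handling omitted):

  # Shallow-fetch only the tip of master (depth=1 keeps the clone small),
  # then detach at the exact revision the job checked out above.
  git init jbp && cd jbp
  git fetch --tags --force --depth=1 \
      https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  git checkout -f 9a23290da272374f14acecb1f0954a7f78afc3cb
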
00:21:48.576 [WS-CLEANUP] done
00:21:48.743 [Pipeline] setCustomBuildProperty
00:21:48.816 [Pipeline] nodesByLabel
00:21:48.818 Found a total of 1 nodes with the 'sorcerer' label
00:21:48.826 [Pipeline] httpRequest
00:21:48.830 HttpMethod: GET
00:21:48.831 URL: http://10.211.164.101/packages/jbp_9a23290da272374f14acecb1f0954a7f78afc3cb.tar.gz
00:21:48.833 Sending request to url: http://10.211.164.101/packages/jbp_9a23290da272374f14acecb1f0954a7f78afc3cb.tar.gz
00:21:48.833 Response Code: HTTP/1.1 200 OK
00:21:48.834 Success: Status code 200 is in the accepted range: 200,404
00:21:48.835 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_9a23290da272374f14acecb1f0954a7f78afc3cb.tar.gz
00:21:48.973 [Pipeline] sh
00:21:49.255 + tar --no-same-owner -xf jbp_9a23290da272374f14acecb1f0954a7f78afc3cb.tar.gz
00:21:49.283 [Pipeline] httpRequest
00:21:49.287 HttpMethod: GET
00:21:49.288 URL: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:21:49.288 Sending request to url: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:21:49.289 Response Code: HTTP/1.1 200 OK
00:21:49.289 Success: Status code 200 is in the accepted range: 200,404
00:21:49.290 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:21:51.452 [Pipeline] sh
00:21:51.735 + tar --no-same-owner -xf spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:21:55.035 [Pipeline] sh
00:21:55.318 + git -C spdk log --oneline -n5
00:21:55.318 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset
00:21:55.318 e2cb5a5ee bdev/nvme: Factor out nvme_ns active/inactive check into a helper function
00:21:55.318 4b134b4ab bdev/nvme: Delay callbacks when the next operation is a failover
00:21:55.318 d2ea4ecb1 llvm/vfio: Suppress checking leaks for `spdk_nvme_ctrlr_alloc_io_qpair`
00:21:55.318 3b33f4333 test/nvme/cuse: Fix typo
00:21:55.337 [Pipeline] writeFile
00:21:55.353 [Pipeline] sh
00:21:55.636 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:21:55.649 [Pipeline] sh
00:21:55.932 + cat autorun-spdk.conf
00:21:55.932 SPDK_RUN_FUNCTIONAL_TEST=1
00:21:55.932 SPDK_TEST_NVMF=1
00:21:55.932 SPDK_TEST_NVMF_TRANSPORT=tcp
00:21:55.932 SPDK_TEST_VFIOUSER=1
00:21:55.932 SPDK_TEST_USDT=1
00:21:55.932 SPDK_RUN_UBSAN=1
00:21:55.932 SPDK_TEST_NVMF_MDNS=1
00:21:55.932 NET_TYPE=virt
00:21:55.932 SPDK_JSONRPC_GO_CLIENT=1
00:21:55.932 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:21:55.939 RUN_NIGHTLY=1
00:21:55.941 [Pipeline] }
00:21:55.956 [Pipeline] // stage
00:21:55.970 [Pipeline] stage
00:21:55.972 [Pipeline] { (Run VM)
00:21:55.985 [Pipeline] sh
00:21:56.268 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:21:56.268 + echo 'Start stage prepare_nvme.sh'
00:21:56.268 Start stage prepare_nvme.sh
00:21:56.268 + [[ -n 5 ]]
00:21:56.268 + disk_prefix=ex5
00:21:56.268 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]]
00:21:56.268 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]]
00:21:56.268 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf
00:21:56.268 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:21:56.268 ++ SPDK_TEST_NVMF=1
00:21:56.268 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:21:56.268 ++ SPDK_TEST_VFIOUSER=1
00:21:56.268 ++ SPDK_TEST_USDT=1
00:21:56.268 ++ SPDK_RUN_UBSAN=1
00:21:56.268 ++ SPDK_TEST_NVMF_MDNS=1
00:21:56.268 ++ NET_TYPE=virt
00:21:56.268 ++ SPDK_JSONRPC_GO_CLIENT=1
00:21:56.268 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:21:56.268 ++ RUN_NIGHTLY=1
00:21:56.268 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2
00:21:56.268 + nvme_files=()
00:21:56.268 + declare -A nvme_files
00:21:56.268 + backend_dir=/var/lib/libvirt/images/backends
00:21:56.268 + nvme_files['nvme.img']=5G
00:21:56.268 + nvme_files['nvme-cmb.img']=5G
00:21:56.268 + nvme_files['nvme-multi0.img']=4G
00:21:56.268 + nvme_files['nvme-multi1.img']=4G
00:21:56.268 + nvme_files['nvme-multi2.img']=4G
00:21:56.268 + nvme_files['nvme-openstack.img']=8G
00:21:56.268 + nvme_files['nvme-zns.img']=5G
00:21:56.268 + (( SPDK_TEST_NVME_PMR == 1 ))
00:21:56.268 + (( SPDK_TEST_FTL == 1 ))
00:21:56.269 + (( SPDK_TEST_NVME_FDP == 1 ))
00:21:56.269 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:21:56.269 + for nvme in "${!nvme_files[@]}"
00:21:56.269 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:21:56.269 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:21:56.269 + for nvme in "${!nvme_files[@]}"
00:21:56.269 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:21:56.269 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:21:56.269 + for nvme in "${!nvme_files[@]}"
00:21:56.269 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:21:56.269 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:21:56.269 + for nvme in "${!nvme_files[@]}"
00:21:56.269 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:21:56.269 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:21:56.269 + for nvme in "${!nvme_files[@]}"
00:21:56.269 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:21:56.269 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:21:56.269 + for nvme in "${!nvme_files[@]}"
00:21:56.269 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:21:56.269 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:21:56.540 + for nvme in "${!nvme_files[@]}"
00:21:56.540 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:21:57.497 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:21:57.497 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:21:57.497 + echo 'End stage prepare_nvme.sh'
00:21:57.497 End stage prepare_nvme.sh
00:21:57.510 [Pipeline] sh
00:21:57.794 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:21:57.794 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora38
00:21:57.794
00:21:57.794 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant
00:21:57.794 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk
00:21:57.794 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2
00:21:57.794 HELP=0
00:21:57.794 DRY_RUN=0
00:21:57.794 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,
00:21:57.794 NVME_DISKS_TYPE=nvme,nvme,
00:21:57.794 NVME_AUTO_CREATE=0
00:21:57.794 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,
00:21:57.794 NVME_CMB=,,
00:21:57.794 NVME_PMR=,,
00:21:57.794 NVME_ZNS=,,
00:21:57.794 NVME_MS=,,
00:21:57.794 NVME_FDP=,,
00:21:57.794 SPDK_VAGRANT_DISTRO=fedora38
00:21:57.794 SPDK_VAGRANT_VMCPU=10
00:21:57.794 SPDK_VAGRANT_VMRAM=12288
00:21:57.794 SPDK_VAGRANT_PROVIDER=libvirt
00:21:57.794 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:21:57.794 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:21:57.794 SPDK_OPENSTACK_NETWORK=0
00:21:57.794 VAGRANT_PACKAGE_BOX=0
00:21:57.794 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:21:57.794 FORCE_DISTRO=true
00:21:57.794 VAGRANT_BOX_VERSION=
00:21:57.794 EXTRA_VAGRANTFILES=
00:21:57.794 NIC_MODEL=virtio
00:21:57.794
00:21:57.794 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt'
00:21:57.794 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2
00:22:00.342 Bringing machine 'default' up with 'libvirt' provider...
00:22:00.912 ==> default: Creating image (snapshot of base box volume).
00:22:01.172 ==> default: Creating domain with the following settings...
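The prepare_nvme.sh stage above drives every backing file from a single associative array. A condensed, standalone sketch of that pattern follows; the sizes and paths are taken from the log, while qemu-img is an assumption on my part (create_nvme_img.sh's internals are not shown here, but the "Formatting ... fmt=raw ... preallocation=falloc" lines match qemu-img's output format):

  #!/usr/bin/env bash
  # Map image name -> size for the raw NVMe backing files.
  declare -A nvme_files=(
      [nvme.img]=5G
      [nvme-multi0.img]=4G
      [nvme-multi1.img]=4G
      [nvme-multi2.img]=4G
  )
  backend_dir=/var/lib/libvirt/images/backends
  for name in "${!nvme_files[@]}"; do
      # Raw format, preallocated with fallocate(), as in the log output above.
      qemu-img create -f raw -o preallocation=falloc \
          "$backend_dir/ex5-$name" "${nvme_files[$name]}"
  done
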
00:22:01.172 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1713341914_4da17ad4d0dbc89f8cd8
00:22:01.172 ==> default: -- Domain type: kvm
00:22:01.173 ==> default: -- Cpus: 10
00:22:01.173 ==> default: -- Feature: acpi
00:22:01.173 ==> default: -- Feature: apic
00:22:01.173 ==> default: -- Feature: pae
00:22:01.173 ==> default: -- Memory: 12288M
00:22:01.173 ==> default: -- Memory Backing: hugepages:
00:22:01.173 ==> default: -- Management MAC:
00:22:01.173 ==> default: -- Loader:
00:22:01.173 ==> default: -- Nvram:
00:22:01.173 ==> default: -- Base box: spdk/fedora38
00:22:01.173 ==> default: -- Storage pool: default
00:22:01.173 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1713341914_4da17ad4d0dbc89f8cd8.img (20G)
00:22:01.173 ==> default: -- Volume Cache: default
00:22:01.173 ==> default: -- Kernel:
00:22:01.173 ==> default: -- Initrd:
00:22:01.173 ==> default: -- Graphics Type: vnc
00:22:01.173 ==> default: -- Graphics Port: -1
00:22:01.173 ==> default: -- Graphics IP: 127.0.0.1
00:22:01.173 ==> default: -- Graphics Password: Not defined
00:22:01.173 ==> default: -- Video Type: cirrus
00:22:01.173 ==> default: -- Video VRAM: 9216
00:22:01.173 ==> default: -- Sound Type:
00:22:01.173 ==> default: -- Keymap: en-us
00:22:01.173 ==> default: -- TPM Path:
00:22:01.173 ==> default: -- INPUT: type=mouse, bus=ps2
00:22:01.173 ==> default: -- Command line args:
00:22:01.173 ==> default: -> value=-device,
00:22:01.173 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:22:01.173 ==> default: -> value=-drive,
00:22:01.173 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0,
00:22:01.173 ==> default: -> value=-device,
00:22:01.173 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:22:01.173 ==> default: -> value=-device,
00:22:01.173 ==> default: -> value=nvme,id=nvme-1,serial=12341,
00:22:01.173 ==> default: -> value=-drive,
00:22:01.173 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:22:01.173 ==> default: -> value=-device,
00:22:01.173 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:22:01.173 ==> default: -> value=-drive,
00:22:01.173 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:22:01.173 ==> default: -> value=-device,
00:22:01.173 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:22:01.173 ==> default: -> value=-drive,
00:22:01.173 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:22:01.173 ==> default: -> value=-device,
00:22:01.173 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:22:01.432 ==> default: Creating shared folders metadata...
00:22:01.432 ==> default: Starting domain.
00:22:02.812 ==> default: Waiting for domain to get an IP address...
00:22:20.934 ==> default: Waiting for SSH to become available...
00:22:20.934 ==> default: Configuring and enabling network interfaces...
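The "-device nvme" controller plus "-device nvme-ns" namespace pairs above are ordinary QEMU emulated-NVMe wiring, just routed through libvirt. Stripped of the scaffolding, a minimal sketch of the first controller (one namespace, paths and serial reused from the log, machine/memory options added only to make the command self-contained):

  # One NVMe controller (nvme-0) with a single 4K-block namespace backed by
  # the raw file created earlier; requires a QEMU with split nvme/nvme-ns
  # devices, which the vanilla-v8.0.0 build used here provides.
  qemu-system-x86_64 -machine q35 -m 1G \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme,id=nvme-0,serial=12340 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096
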
00:22:26.209 default: SSH address: 192.168.121.176:22
00:22:26.209 default: SSH username: vagrant
00:22:26.209 default: SSH auth method: private key
00:22:29.496 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:22:37.620 ==> default: Mounting SSHFS shared folder...
00:22:39.528 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:22:39.528 ==> default: Checking Mount..
00:22:40.902 ==> default: Folder Successfully Mounted!
00:22:40.902 ==> default: Running provisioner: file...
00:22:41.837 default: ~/.gitconfig => .gitconfig
00:22:42.096
00:22:42.096 SUCCESS!
00:22:42.096
00:22:42.096 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use.
00:22:42.096 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:22:42.096 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm.
00:22:42.096
00:22:42.105 [Pipeline] }
00:22:42.123 [Pipeline] // stage
00:22:42.132 [Pipeline] dir
00:22:42.132 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt
00:22:42.134 [Pipeline] {
00:22:42.148 [Pipeline] catchError
00:22:42.150 [Pipeline] {
00:22:42.163 [Pipeline] sh
00:22:42.455 + vagrant ssh-config --host vagrant
00:22:42.455 + sed -ne /^Host/,$p
00:22:42.455 + tee ssh_conf
00:22:45.758 Host vagrant
00:22:45.758 HostName 192.168.121.176
00:22:45.758 User vagrant
00:22:45.758 Port 22
00:22:45.758 UserKnownHostsFile /dev/null
00:22:45.758 StrictHostKeyChecking no
00:22:45.758 PasswordAuthentication no
00:22:45.758 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38
00:22:45.758 IdentitiesOnly yes
00:22:45.758 LogLevel FATAL
00:22:45.758 ForwardAgent yes
00:22:45.758 ForwardX11 yes
00:22:45.758
00:22:45.772 [Pipeline] withEnv
00:22:45.775 [Pipeline] {
00:22:45.791 [Pipeline] sh
00:22:46.071 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:22:46.071 source /etc/os-release
00:22:46.071 [[ -e /image.version ]] && img=$(< /image.version)
00:22:46.071 # Minimal, systemd-like check.
00:22:46.071 if [[ -e /.dockerenv ]]; then
00:22:46.071 # Clear garbage from the node's name:
00:22:46.071 # agt-er_autotest_547-896 -> autotest_547-896
00:22:46.071 # $HOSTNAME is the actual container id
00:22:46.071 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:22:46.071 if mountpoint -q /etc/hostname; then
00:22:46.071 # We can assume this is a mount from a host where container is running,
00:22:46.071 # so fetch its hostname to easily identify the target swarm worker.
00:22:46.071 container="$(< /etc/hostname) ($agent)"
00:22:46.071 else
00:22:46.072 # Fallback
00:22:46.072 container=$agent
00:22:46.072 fi
00:22:46.072 fi
00:22:46.072 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:22:46.072
00:22:46.342 [Pipeline] }
00:22:46.363 [Pipeline] // withEnv
00:22:46.372 [Pipeline] setCustomBuildProperty
00:22:46.387 [Pipeline] stage
00:22:46.389 [Pipeline] { (Tests)
00:22:46.409 [Pipeline] sh
00:22:46.689 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:22:46.961 [Pipeline] timeout
00:22:46.961 Timeout set to expire in 40 min
00:22:46.963 [Pipeline] {
00:22:46.978 [Pipeline] sh
00:22:47.261 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:22:47.829 HEAD is now at 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset
00:22:47.844 [Pipeline] sh
00:22:48.125 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:22:48.398 [Pipeline] sh
00:22:48.679 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:22:48.953 [Pipeline] sh
00:22:49.239 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo
00:22:49.499 ++ readlink -f spdk_repo
00:22:49.499 + DIR_ROOT=/home/vagrant/spdk_repo
00:22:49.499 + [[ -n /home/vagrant/spdk_repo ]]
00:22:49.499 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:22:49.499 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:22:49.499 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:22:49.499 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:22:49.499 + [[ -d /home/vagrant/spdk_repo/output ]]
00:22:49.499 + cd /home/vagrant/spdk_repo
00:22:49.499 + source /etc/os-release
00:22:49.499 ++ NAME='Fedora Linux'
00:22:49.499 ++ VERSION='38 (Cloud Edition)'
00:22:49.499 ++ ID=fedora
00:22:49.499 ++ VERSION_ID=38
00:22:49.499 ++ VERSION_CODENAME=
00:22:49.499 ++ PLATFORM_ID=platform:f38
00:22:49.499 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:22:49.499 ++ ANSI_COLOR='0;38;2;60;110;180'
00:22:49.499 ++ LOGO=fedora-logo-icon
00:22:49.499 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:22:49.499 ++ HOME_URL=https://fedoraproject.org/
00:22:49.499 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:22:49.499 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:22:49.499 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:22:49.499 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:22:49.499 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:22:49.499 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:22:49.499 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:22:49.499 ++ SUPPORT_END=2024-05-14
00:22:49.499 ++ VARIANT='Cloud Edition'
00:22:49.499 ++ VARIANT_ID=cloud
00:22:49.499 + uname -a
00:22:49.499 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:22:49.499 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:22:49.499 Hugepages
00:22:49.499 node hugesize free / total
00:22:49.499 node0 1048576kB 0 / 0
00:22:49.499 node0 2048kB 0 / 0
00:22:49.499
00:22:49.499 Type BDF Vendor Device NUMA Driver Device Block devices
00:22:49.760 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:22:49.760 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:22:49.760 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:22:49.760 + rm -f /tmp/spdk-ld-path
00:22:49.760 + source autorun-spdk.conf
00:22:49.760 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:22:49.760 ++ SPDK_TEST_NVMF=1
00:22:49.760 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:22:49.760 ++ SPDK_TEST_VFIOUSER=1
00:22:49.760 ++ SPDK_TEST_USDT=1
00:22:49.760 ++ SPDK_RUN_UBSAN=1
00:22:49.760 ++ SPDK_TEST_NVMF_MDNS=1
00:22:49.760 ++ NET_TYPE=virt
00:22:49.760 ++ SPDK_JSONRPC_GO_CLIENT=1
00:22:49.760 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:22:49.760 ++ RUN_NIGHTLY=1
00:22:49.760 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:22:49.760 + [[ -n '' ]]
00:22:49.760 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:22:49.760 + for M in /var/spdk/build-*-manifest.txt
00:22:49.760 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:22:49.760 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:22:49.760 + for M in /var/spdk/build-*-manifest.txt
00:22:49.760 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:22:49.760 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:22:49.760 ++ uname
00:22:49.760 + [[ Linux == \L\i\n\u\x ]]
00:22:49.760 + sudo dmesg -T
00:22:49.760 + sudo dmesg --clear
00:22:49.760 + dmesg_pid=5294
00:22:49.760 + [[ Fedora Linux == FreeBSD ]]
00:22:49.760 + sudo dmesg -Tw
00:22:49.760 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:22:49.760 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:22:49.760 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:22:49.760 + [[ -x /usr/src/fio-static/fio ]]
00:22:49.760 + export FIO_BIN=/usr/src/fio-static/fio
00:22:49.760 + FIO_BIN=/usr/src/fio-static/fio
00:22:49.760 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:22:49.760 + [[ ! -v VFIO_QEMU_BIN ]]
00:22:49.760 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:22:49.760 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:22:49.760 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:22:49.760 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:22:49.760 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:22:49.760 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:22:49.760 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:22:50.019 Test configuration:
00:22:50.019 SPDK_RUN_FUNCTIONAL_TEST=1
00:22:50.019 SPDK_TEST_NVMF=1
00:22:50.019 SPDK_TEST_NVMF_TRANSPORT=tcp
00:22:50.019 SPDK_TEST_VFIOUSER=1
00:22:50.019 SPDK_TEST_USDT=1
00:22:50.019 SPDK_RUN_UBSAN=1
00:22:50.019 SPDK_TEST_NVMF_MDNS=1
00:22:50.019 NET_TYPE=virt
00:22:50.019 SPDK_JSONRPC_GO_CLIENT=1
00:22:50.019 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:22:50.019 RUN_NIGHTLY=1
08:19:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:22:50.019 08:19:23 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:22:50.019 08:19:23 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:50.019 08:19:23 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:50.019 08:19:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:50.019 08:19:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:50.019 08:19:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:50.019 08:19:23 -- paths/export.sh@5 -- $ export PATH
00:22:50.019 08:19:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:50.019 08:19:23 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:22:50.019 08:19:23 -- common/autobuild_common.sh@435 -- $ date +%s
00:22:50.019 08:19:23 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713341963.XXXXXX
00:22:50.019 08:19:23 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713341963.3QMAdk
00:22:50.019 08:19:23 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:22:50.019 08:19:23 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:22:50.019 08:19:23 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:22:50.019 08:19:23 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:22:50.019 08:19:23 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:22:50.019 08:19:23 -- common/autobuild_common.sh@451 -- $ get_config_params
00:22:50.019 08:19:23 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:22:50.019 08:19:23 -- common/autotest_common.sh@10 -- $ set +x
00:22:50.019 08:19:23 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang'
00:22:50.019 08:19:23 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:22:50.019 08:19:23 -- spdk/autobuild.sh@12 -- $ umask 022
00:22:50.019 08:19:23 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:22:50.019 08:19:23 -- spdk/autobuild.sh@16 -- $ date -u
00:22:50.019 Wed Apr 17 08:19:23 AM UTC 2024
00:22:50.019 08:19:23 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:22:50.019 LTS-24-g36faa8c31
00:22:50.019 08:19:23 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:22:50.019 08:19:23 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:22:50.019 08:19:23 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:22:50.019 08:19:23 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:22:50.019 08:19:23 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:22:50.019 08:19:23 -- common/autotest_common.sh@10 -- $ set +x
00:22:50.019 ************************************
00:22:50.019 START TEST ubsan
00:22:50.019 ************************************
00:22:50.019 using ubsan
00:22:50.019 08:19:23 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:22:50.019
00:22:50.019 real 0m0.000s
00:22:50.019 user 0m0.000s
00:22:50.019 sys 0m0.000s
00:22:50.019 08:19:23 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:22:50.019 08:19:23 -- common/autotest_common.sh@10 -- $ set +x
00:22:50.019 ************************************
00:22:50.019 END TEST ubsan
00:22:50.019 ************************************
00:22:50.019 08:19:23 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:22:50.019 08:19:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:22:50.019 08:19:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:22:50.019 08:19:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:22:50.019 08:19:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:22:50.019 08:19:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:22:50.019 08:19:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:22:50.019 08:19:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:22:50.019 08:19:23 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang --with-shared
00:22:50.278 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:22:50.278 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:22:50.846 Using 'verbs' RDMA provider
00:23:06.297 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:23:21.185 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:23:21.185 go version go1.21.1 linux/amd64
00:23:21.185 Creating mk/config.mk...done.
00:23:21.185 Creating mk/cc.flags.mk...done.
00:23:21.185 Type 'make' to build.
00:23:21.185 08:19:52 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:23:21.185 08:19:52 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:23:21.185 08:19:52 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:23:21.185 08:19:52 -- common/autotest_common.sh@10 -- $ set +x
00:23:21.185 ************************************
00:23:21.185 START TEST make
00:23:21.185 ************************************
00:23:21.185 08:19:52 -- common/autotest_common.sh@1104 -- $ make -j10
00:23:21.185 make[1]: Nothing to be done for 'all'.
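The START/END banners and the real/user/sys trios above come from SPDK's run_test helper in autotest_common.sh; a hypothetical, heavily reduced sketch of its shape (the real function also toggles xtrace, performs the argument check that produces the '[' 3 -le 1 ']' lines above, and does more exit-status bookkeeping):

  run_test() {
      # Banner, timed command, banner - matching the log output above.
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  run_test ubsan echo 'using ubsan'
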
00:23:21.185 The Meson build system
00:23:21.185 Version: 1.3.1
00:23:21.185 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user
00:23:21.185 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug
00:23:21.185 Build type: native build
00:23:21.185 Project name: libvfio-user
00:23:21.185 Project version: 0.0.1
00:23:21.185 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:23:21.185 C linker for the host machine: cc ld.bfd 2.39-16
00:23:21.185 Host machine cpu family: x86_64
00:23:21.185 Host machine cpu: x86_64
00:23:21.185 Run-time dependency threads found: YES
00:23:21.185 Library dl found: YES
00:23:21.185 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:23:21.185 Run-time dependency json-c found: YES 0.17
00:23:21.185 Run-time dependency cmocka found: YES 1.1.7
00:23:21.185 Program pytest-3 found: NO
00:23:21.185 Program flake8 found: NO
00:23:21.185 Program misspell-fixer found: NO
00:23:21.185 Program restructuredtext-lint found: NO
00:23:21.185 Program valgrind found: YES (/usr/bin/valgrind)
00:23:21.185 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:23:21.185 Compiler for C supports arguments -Wmissing-declarations: YES
00:23:21.185 Compiler for C supports arguments -Wwrite-strings: YES
00:23:21.185 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:23:21.185 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh)
00:23:21.185 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh)
00:23:21.185 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:23:21.185 Build targets in project: 8
00:23:21.185 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:23:21.185 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:23:21.185
00:23:21.185 libvfio-user 0.0.1
00:23:21.185
00:23:21.185 User defined options
00:23:21.185 buildtype : debug
00:23:21.185 default_library: shared
00:23:21.185 libdir : /usr/local/lib
00:23:21.185
00:23:21.185 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:23:21.752 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug'
00:23:21.752 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:23:21.752 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:23:21.752 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:23:22.010 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:23:22.011 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:23:22.011 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:23:22.011 [7/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:23:22.011 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:23:22.011 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:23:22.011 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:23:22.011 [11/37] Compiling C object samples/null.p/null.c.o
00:23:22.011 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:23:22.011 [13/37] Compiling C object samples/lspci.p/lspci.c.o
00:23:22.011 [14/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:23:22.011 [15/37] Compiling C object samples/server.p/server.c.o
00:23:22.011 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:23:22.268 [17/37] Compiling C object samples/client.p/client.c.o
00:23:22.268 [18/37] Compiling C object test/unit_tests.p/mocks.c.o
00:23:22.268 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:23:22.268 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:23:22.268 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:23:22.268 [22/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:23:22.268 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:23:22.268 [24/37] Linking target samples/client
00:23:22.268 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:23:22.268 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:23:22.268 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:23:22.268 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:23:22.268 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:23:22.268 [30/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:23:22.268 [31/37] Linking target test/unit_tests
00:23:22.526 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:23:22.526 [33/37] Linking target samples/null
00:23:22.526 [34/37] Linking target samples/gpio-pci-idio-16
00:23:22.526 [35/37] Linking target samples/shadow_ioeventfd_server
00:23:22.526 [36/37] Linking target samples/server
00:23:22.526 [37/37] Linking target samples/lspci
00:23:22.526 INFO: autodetecting backend as ninja
00:23:22.526 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug
00:23:22.526 DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug
00:23:23.095 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug'
00:23:23.095 ninja: no work to do.
00:23:33.076 The Meson build system
00:23:33.076 Version: 1.3.1
00:23:33.076 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:23:33.076 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:23:33.076 Build type: native build
00:23:33.076 Program cat found: YES (/usr/bin/cat)
00:23:33.076 Project name: DPDK
00:23:33.076 Project version: 23.11.0
00:23:33.076 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:23:33.076 C linker for the host machine: cc ld.bfd 2.39-16
00:23:33.076 Host machine cpu family: x86_64
00:23:33.076 Host machine cpu: x86_64
00:23:33.076 Message: ## Building in Developer Mode ##
00:23:33.076 Program pkg-config found: YES (/usr/bin/pkg-config)
00:23:33.076 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:23:33.076 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:23:33.076 Program python3 found: YES (/usr/bin/python3)
00:23:33.076 Program cat found: YES (/usr/bin/cat)
00:23:33.076 Compiler for C supports arguments -march=native: YES
00:23:33.076 Checking for size of "void *" : 8
00:23:33.076 Checking for size of "void *" : 8 (cached)
00:23:33.076 Library m found: YES
00:23:33.076 Library numa found: YES
00:23:33.076 Has header "numaif.h" : YES
00:23:33.076 Library fdt found: NO
00:23:33.076 Library execinfo found: NO
00:23:33.076 Has header "execinfo.h" : YES
00:23:33.076 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:23:33.076 Run-time dependency libarchive found: NO (tried pkgconfig)
00:23:33.076 Run-time dependency libbsd found: NO (tried pkgconfig)
00:23:33.076 Run-time dependency jansson found: NO (tried pkgconfig)
00:23:33.076 Run-time dependency openssl found: YES 3.0.9
00:23:33.076 Run-time dependency libpcap found: YES 1.10.4
00:23:33.076 Has header "pcap.h" with dependency libpcap: YES
00:23:33.076 Compiler for C supports arguments -Wcast-qual: YES
00:23:33.076 Compiler for C supports arguments -Wdeprecated: YES
00:23:33.076 Compiler for C supports arguments -Wformat: YES
00:23:33.076 Compiler for C supports arguments -Wformat-nonliteral: NO
00:23:33.076 Compiler for C supports arguments -Wformat-security: NO
00:23:33.076 Compiler for C supports arguments -Wmissing-declarations: YES
00:23:33.076 Compiler for C supports arguments -Wmissing-prototypes: YES
00:23:33.076 Compiler for C supports arguments -Wnested-externs: YES
00:23:33.076 Compiler for C supports arguments -Wold-style-definition: YES
00:23:33.076 Compiler for C supports arguments -Wpointer-arith: YES
00:23:33.076 Compiler for C supports arguments -Wsign-compare: YES
00:23:33.076 Compiler for C supports arguments -Wstrict-prototypes: YES
00:23:33.076 Compiler for C supports arguments -Wundef: YES
00:23:33.076 Compiler for C supports arguments -Wwrite-strings: YES
00:23:33.076 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:23:33.076 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:23:33.076 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:23:33.076 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:23:33.076 Program objdump found: YES (/usr/bin/objdump)
00:23:33.076 Compiler for C supports arguments -mavx512f: YES
00:23:33.076 Checking if "AVX512 checking" compiles: YES
00:23:33.076 Fetching value of define "__SSE4_2__" : 1
00:23:33.076 Fetching value of define "__AES__" : 1
00:23:33.076 Fetching value of define "__AVX__" : 1
00:23:33.076 Fetching value of define "__AVX2__" : 1
00:23:33.076 Fetching value of define "__AVX512BW__" : 1
00:23:33.076 Fetching value of define "__AVX512CD__" : 1
00:23:33.076 Fetching value of define "__AVX512DQ__" : 1
00:23:33.076 Fetching value of define "__AVX512F__" : 1
00:23:33.076 Fetching value of define "__AVX512VL__" : 1
00:23:33.076 Fetching value of define "__PCLMUL__" : 1
00:23:33.076 Fetching value of define "__RDRND__" : 1
00:23:33.076 Fetching value of define "__RDSEED__" : 1
00:23:33.076 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:23:33.076 Fetching value of define "__znver1__" : (undefined)
00:23:33.076 Fetching value of define "__znver2__" : (undefined)
00:23:33.076 Fetching value of define "__znver3__" : (undefined)
00:23:33.076 Fetching value of define "__znver4__" : (undefined)
00:23:33.076 Compiler for C supports arguments -Wno-format-truncation: YES
00:23:33.076 Message: lib/log: Defining dependency "log"
00:23:33.076 Message: lib/kvargs: Defining dependency "kvargs"
00:23:33.076 Message: lib/telemetry: Defining dependency "telemetry"
00:23:33.076 Checking for function "getentropy" : NO
00:23:33.076 Message: lib/eal: Defining dependency "eal"
00:23:33.076 Message: lib/ring: Defining dependency "ring"
00:23:33.076 Message: lib/rcu: Defining dependency "rcu"
00:23:33.076 Message: lib/mempool: Defining dependency "mempool"
00:23:33.076 Message: lib/mbuf: Defining dependency "mbuf"
00:23:33.076 Fetching value of define "__PCLMUL__" : 1 (cached)
00:23:33.076 Fetching value of define "__AVX512F__" : 1 (cached)
00:23:33.076 Fetching value of define "__AVX512BW__" : 1 (cached)
00:23:33.076 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:23:33.076 Fetching value of define "__AVX512VL__" : 1 (cached)
00:23:33.076 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:23:33.076 Compiler for C supports arguments -mpclmul: YES
00:23:33.076 Compiler for C supports arguments -maes: YES
00:23:33.076 Compiler for C supports arguments -mavx512f: YES (cached)
00:23:33.076 Compiler for C supports arguments -mavx512bw: YES
00:23:33.076 Compiler for C supports arguments -mavx512dq: YES
00:23:33.076 Compiler for C supports arguments -mavx512vl: YES
00:23:33.076 Compiler for C supports arguments -mvpclmulqdq: YES
00:23:33.076 Compiler for C supports arguments -mavx2: YES
00:23:33.076 Compiler for C supports arguments -mavx: YES
00:23:33.076 Message: lib/net: Defining dependency "net"
00:23:33.076 Message: lib/meter: Defining dependency "meter"
00:23:33.076 Message: lib/ethdev: Defining dependency "ethdev"
00:23:33.076 Message: lib/pci: Defining dependency "pci"
00:23:33.076 Message: lib/cmdline: Defining dependency "cmdline"
00:23:33.076 Message: lib/hash: Defining dependency "hash"
00:23:33.076 Message: lib/timer: Defining dependency "timer"
00:23:33.076 Message: lib/compressdev: Defining dependency "compressdev"
00:23:33.076 Message: lib/cryptodev: Defining dependency "cryptodev"
00:23:33.076 Message: lib/dmadev: Defining dependency "dmadev"
00:23:33.076 Compiler for C supports arguments -Wno-cast-qual: YES
00:23:33.076 Message: lib/power: Defining dependency "power"
00:23:33.076 Message: lib/reorder: Defining dependency "reorder"
00:23:33.076 Message: lib/security: Defining dependency "security"
00:23:33.076 Has header "linux/userfaultfd.h" : YES
00:23:33.076 Has header "linux/vduse.h" : YES
00:23:33.076 Message: lib/vhost: Defining dependency "vhost"
00:23:33.076 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:23:33.076 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:23:33.076 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:23:33.076 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:23:33.076 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:23:33.076 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:23:33.076 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:23:33.076 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:23:33.076 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:23:33.076 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:23:33.076 Program doxygen found: YES (/usr/bin/doxygen)
00:23:33.076 Configuring doxy-api-html.conf using configuration
00:23:33.076 Configuring doxy-api-man.conf using configuration
00:23:33.076 Program mandb found: YES (/usr/bin/mandb)
00:23:33.076 Program sphinx-build found: NO
00:23:33.076 Configuring rte_build_config.h using configuration
00:23:33.076 Message:
00:23:33.077 =================
00:23:33.077 Applications Enabled
00:23:33.077 =================
00:23:33.077
00:23:33.077 apps:
00:23:33.077
00:23:33.077
00:23:33.077 Message:
00:23:33.077 =================
00:23:33.077 Libraries Enabled
00:23:33.077 =================
00:23:33.077
00:23:33.077 libs:
00:23:33.077 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:23:33.077 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:23:33.077 cryptodev, dmadev, power, reorder, security, vhost,
00:23:33.077
00:23:33.077 Message:
00:23:33.077 ===============
00:23:33.077 Drivers Enabled
00:23:33.077 ===============
00:23:33.077
00:23:33.077 common:
00:23:33.077
00:23:33.077 bus:
00:23:33.077 pci, vdev,
00:23:33.077 mempool:
00:23:33.077 ring,
00:23:33.077 dma:
00:23:33.077
00:23:33.077 net:
00:23:33.077
00:23:33.077 crypto:
00:23:33.077
00:23:33.077 compress:
00:23:33.077
00:23:33.077 vdpa:
00:23:33.077
00:23:33.077
00:23:33.077 Message:
00:23:33.077 =================
00:23:33.077 Content Skipped
00:23:33.077 =================
00:23:33.077
00:23:33.077 apps:
00:23:33.077 dumpcap: explicitly disabled via build config
00:23:33.077 graph: explicitly disabled via build config
00:23:33.077 pdump: explicitly disabled via build config
00:23:33.077 proc-info: explicitly disabled via build config
00:23:33.077 test-acl: explicitly disabled via build config
00:23:33.077 test-bbdev: explicitly disabled via build config
00:23:33.077 test-cmdline: explicitly disabled via build config
00:23:33.077 test-compress-perf: explicitly disabled via build config
00:23:33.077 test-crypto-perf: explicitly disabled via build config
00:23:33.077 test-dma-perf: explicitly disabled via build config
00:23:33.077 test-eventdev: explicitly disabled via build config
00:23:33.077 test-fib: explicitly disabled via build config
00:23:33.077 test-flow-perf: explicitly disabled via build config
00:23:33.077 test-gpudev: explicitly disabled via build config
00:23:33.077 test-mldev: explicitly disabled via build config
00:23:33.077 test-pipeline: explicitly disabled via build config
00:23:33.077 test-pmd: explicitly disabled via build config
00:23:33.077 test-regex: explicitly disabled via build config
00:23:33.077 test-sad: explicitly disabled via build config
00:23:33.077 test-security-perf: explicitly disabled via build config
00:23:33.077
00:23:33.077 libs:
00:23:33.077 metrics: explicitly disabled via build config
00:23:33.077 acl: explicitly disabled via build config
00:23:33.077 bbdev: explicitly disabled via build config
00:23:33.077 bitratestats: explicitly disabled via build config
00:23:33.077 bpf: explicitly disabled via build config
00:23:33.077 cfgfile: explicitly disabled via build config
00:23:33.077 distributor: explicitly disabled via build config
00:23:33.077 efd: explicitly disabled via build config
00:23:33.077 eventdev: explicitly disabled via build config
00:23:33.077 dispatcher: explicitly disabled via build config
00:23:33.077 gpudev: explicitly disabled via build config
00:23:33.077 gro: explicitly disabled via build config
00:23:33.077 gso: explicitly disabled via build config
00:23:33.077 ip_frag: explicitly disabled via build config
00:23:33.077 jobstats: explicitly disabled via build config
00:23:33.077 latencystats: explicitly disabled via build config
00:23:33.077 lpm: explicitly disabled via build config
00:23:33.077 member: explicitly disabled via build config
00:23:33.077 pcapng: explicitly disabled via build config
00:23:33.077 rawdev: explicitly disabled via build config
00:23:33.077 regexdev: explicitly disabled via build config
00:23:33.077 mldev: explicitly disabled via build config
00:23:33.077 rib: explicitly disabled via build config
00:23:33.077 sched: explicitly disabled via build config
00:23:33.077 stack: explicitly disabled via build config
00:23:33.077 ipsec: explicitly disabled via build config
00:23:33.077 pdcp: explicitly disabled via build config
00:23:33.077 fib: explicitly disabled via build config
00:23:33.077 port: explicitly disabled via build config
00:23:33.077 pdump: explicitly disabled via build config
00:23:33.077 table: explicitly disabled via build config
00:23:33.077 pipeline: explicitly disabled via build config
00:23:33.077 graph: explicitly disabled via build config
00:23:33.077 node: explicitly disabled via build config
00:23:33.077
00:23:33.077 drivers:
00:23:33.077 common/cpt: not in enabled drivers build config
00:23:33.077 common/dpaax: not in enabled drivers build config
00:23:33.077 common/iavf: not in enabled drivers build config
00:23:33.077 common/idpf: not in enabled drivers build config
00:23:33.077 common/mvep: not in enabled drivers build config
00:23:33.077 common/octeontx: not in enabled drivers build config
00:23:33.077 bus/auxiliary: not in enabled drivers build config
00:23:33.077 bus/cdx: not in enabled drivers build config
00:23:33.077 bus/dpaa: not in enabled drivers build config
00:23:33.077 bus/fslmc: not in enabled drivers build config
00:23:33.077 bus/ifpga: not in enabled drivers build config
00:23:33.077 bus/platform: not in enabled drivers build config
00:23:33.077 bus/vmbus: not in enabled drivers build config
00:23:33.077 common/cnxk: not in enabled drivers build config
00:23:33.077 common/mlx5: not in enabled drivers build config
00:23:33.077 common/nfp: not in enabled drivers build config
00:23:33.077 common/qat: not in enabled drivers build config
00:23:33.077 common/sfc_efx: not in enabled drivers build config
00:23:33.077 mempool/bucket: not in enabled drivers build config
00:23:33.077 mempool/cnxk: not in enabled drivers build config
00:23:33.077 mempool/dpaa: not in enabled drivers build config
00:23:33.077 mempool/dpaa2: not in enabled drivers build config
00:23:33.077 mempool/octeontx: not in enabled drivers build config
00:23:33.077 mempool/stack: not in enabled drivers build config
00:23:33.077 dma/cnxk: not in enabled drivers build config
00:23:33.077 dma/dpaa: not in enabled drivers build config
00:23:33.077 dma/dpaa2: not in enabled drivers build config
00:23:33.077 dma/hisilicon: not in enabled drivers build config
00:23:33.077 dma/idxd: not in enabled drivers build config
00:23:33.077 dma/ioat: not in enabled drivers build config
00:23:33.077 dma/skeleton: not in enabled drivers build config
00:23:33.077 net/af_packet: not in enabled drivers build config
00:23:33.077 net/af_xdp: not in enabled drivers build config
00:23:33.077 net/ark: not in enabled drivers build config
00:23:33.077 net/atlantic: not in enabled drivers build config
00:23:33.077 net/avp: not in enabled drivers build config
00:23:33.077 net/axgbe: not in enabled drivers build config
00:23:33.077 net/bnx2x: not in enabled drivers build config
00:23:33.077 net/bnxt: not in enabled drivers build config
00:23:33.077 net/bonding: not in enabled drivers build config
00:23:33.077 net/cnxk: not in enabled drivers build config
00:23:33.077 net/cpfl: not in enabled drivers build config
00:23:33.077 net/cxgbe: not in enabled drivers build config
00:23:33.077 net/dpaa: not in enabled drivers build config
00:23:33.077 net/dpaa2: not in enabled drivers build config
00:23:33.077 net/e1000: not in enabled drivers build config
00:23:33.077 net/ena: not in enabled drivers build config
00:23:33.077 net/enetc: not in enabled drivers build config
00:23:33.077 net/enetfec: not in enabled drivers build config
00:23:33.077 net/enic: not in enabled drivers build config
00:23:33.077 net/failsafe: not in enabled drivers build config
00:23:33.077 net/fm10k: not in enabled drivers build config
00:23:33.077 net/gve: not in enabled drivers build config
00:23:33.077 net/hinic: not in enabled drivers build config
00:23:33.077 net/hns3: not in enabled drivers build config
00:23:33.077 net/i40e: not in enabled drivers build config
00:23:33.077 net/iavf: not in enabled drivers build config
00:23:33.077 net/ice: not in enabled drivers build config
00:23:33.077 net/idpf: not in enabled drivers build config
00:23:33.077 net/igc: not in enabled drivers build config
00:23:33.077 net/ionic: not in enabled drivers build config
00:23:33.077 net/ipn3ke: not in enabled drivers build config
00:23:33.077 net/ixgbe: not in enabled drivers build config
00:23:33.077 net/mana: not in enabled drivers build config
00:23:33.077 net/memif: not in enabled drivers build config
00:23:33.077 net/mlx4: not in enabled drivers build config
00:23:33.077 net/mlx5: not in enabled drivers build config
00:23:33.077 net/mvneta: not in enabled drivers build config
00:23:33.077 net/mvpp2: not in enabled drivers build config
00:23:33.077 net/netvsc: not in enabled drivers build config
00:23:33.077 net/nfb: not in enabled drivers build config
00:23:33.077 net/nfp: not in enabled drivers build config
00:23:33.077 net/ngbe: not in enabled drivers build config
00:23:33.077 net/null: not in enabled drivers build config
00:23:33.077 net/octeontx: not in enabled drivers build config
00:23:33.077 net/octeon_ep: not in enabled drivers build config
00:23:33.077 net/pcap: not in enabled drivers build config
00:23:33.077 net/pfe: not in enabled drivers build config
00:23:33.077 net/qede: not in enabled drivers build config
00:23:33.077 net/ring: not in enabled drivers build config
00:23:33.077 net/sfc: not in enabled drivers build config
00:23:33.077 net/softnic: not in enabled drivers build config
00:23:33.077 net/tap: not in enabled drivers build config
00:23:33.077 net/thunderx: not in enabled drivers build config
00:23:33.077 net/txgbe: not in enabled drivers build config
00:23:33.077 net/vdev_netvsc: not in enabled drivers build config
00:23:33.077 net/vhost: not in enabled drivers build config
00:23:33.077 net/virtio: not in enabled drivers build config
00:23:33.077 net/vmxnet3: not in enabled drivers build config
00:23:33.077 raw/*: missing internal dependency, "rawdev"
00:23:33.077 crypto/armv8: not in enabled drivers build config
00:23:33.077 crypto/bcmfs: not in enabled drivers build config
00:23:33.077 crypto/caam_jr: not in enabled drivers build config
00:23:33.077 crypto/ccp: not in enabled drivers build config
00:23:33.077 crypto/cnxk: not in enabled drivers build config
00:23:33.077 crypto/dpaa_sec: not in enabled drivers build config
00:23:33.077 crypto/dpaa2_sec: not in enabled drivers build config
00:23:33.077 crypto/ipsec_mb: not in enabled drivers build config
00:23:33.077 crypto/mlx5: not in enabled drivers build config
00:23:33.077 crypto/mvsam: not in enabled drivers build config
00:23:33.077 crypto/nitrox: not in enabled drivers build config
00:23:33.077 crypto/null: not in enabled drivers build config
00:23:33.078 crypto/octeontx: not in enabled drivers build config
00:23:33.078 crypto/openssl: not in enabled drivers build config
00:23:33.078 crypto/scheduler: not in enabled drivers build config
00:23:33.078 crypto/uadk: not in enabled drivers build config
00:23:33.078 crypto/virtio: not in enabled drivers build config
00:23:33.078 compress/isal: not in enabled drivers build config
00:23:33.078 compress/mlx5: not in enabled drivers build config
00:23:33.078 compress/octeontx: not in enabled drivers build config
00:23:33.078 compress/zlib: not in enabled drivers build config
00:23:33.078 regex/*: missing internal dependency, "regexdev"
00:23:33.078 ml/*: missing internal dependency, "mldev"
00:23:33.078 vdpa/ifc: not in enabled drivers build config
00:23:33.078 vdpa/mlx5: not in enabled drivers build config
00:23:33.078 vdpa/nfp: not in enabled drivers build config
00:23:33.078 vdpa/sfc: not in enabled drivers build config
00:23:33.078 event/*: missing internal dependency, "eventdev"
00:23:33.078 baseband/*: missing internal dependency, "bbdev"
00:23:33.078 gpu/*: missing internal dependency, "gpudev"
00:23:33.078
00:23:33.078
00:23:33.078 Build targets in project: 85
00:23:33.078
00:23:33.078 DPDK 23.11.0
00:23:33.078
00:23:33.078 User defined options
00:23:33.078 buildtype : debug
00:23:33.078 default_library : shared
00:23:33.078 libdir : lib
00:23:33.078 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:23:33.078 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds
00:23:33.078 c_link_args :
00:23:33.078 cpu_instruction_set: native
00:23:33.078 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:23:33.078 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:23:33.078 enable_docs : false
00:23:33.078 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:23:33.078 enable_kmods : false
00:23:33.078 tests : false
00:23:33.078
00:23:33.078 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:23:33.078 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:23:33.078 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:23:33.078 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:23:33.078 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:23:33.078 [4/265] Linking static target lib/librte_kvargs.a
00:23:33.078 [5/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:23:33.078 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:23:33.078 [7/265] Linking static target lib/librte_log.a
00:23:33.078 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:23:33.078 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:23:33.078 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:23:33.078 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:23:33.078 [12/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:23:33.336 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:23:33.336 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:23:33.336 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:23:33.336 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:23:33.336 [17/265] Linking static target lib/librte_telemetry.a
00:23:33.336 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:23:33.593 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:23:33.593 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:23:33.593 [21/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:23:33.593 [22/265] Linking target lib/librte_log.so.24.0
00:23:33.593 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:23:33.593 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:23:33.852 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:23:33.852 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:23:33.852 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:23:33.852 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:23:33.852 [29/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:23:33.852 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:23:34.112 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:23:34.112 [32/265] Linking target lib/librte_kvargs.so.24.0
00:23:34.112 [33/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:23:34.112 [34/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:23:34.370 [35/265] Linking target lib/librte_telemetry.so.24.0
00:23:34.370 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:23:34.370 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:23:34.370 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:23:34.370 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:23:34.370 [40/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:23:34.629 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:23:34.629 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:23:34.629 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:23:34.629 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:23:34.629 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:23:34.629 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:23:34.888 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:23:34.888 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:23:34.888 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:23:34.888 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:23:35.146 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:23:35.146 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:23:35.146 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:23:35.146 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:23:35.422 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:23:35.422 [56/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:23:35.422 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:23:35.422 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:23:35.422 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:23:35.422 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:23:35.422 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:23:35.422 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:23:35.695 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:23:35.695 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:23:35.953 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:23:35.953 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:23:35.953 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:23:35.953 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:23:36.212 [69/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:23:36.212 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:23:36.212 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:23:36.212 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:23:36.212 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:23:36.212 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:23:36.212 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:23:36.212 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:23:36.212 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:23:36.212 [78/265] Compiling C
object lib/librte_eal.a.p/eal_linux_eal.c.o 00:23:36.471 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:23:36.471 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:23:36.471 [81/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:23:36.471 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:23:36.730 [83/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:23:36.730 [84/265] Linking static target lib/librte_ring.a 00:23:36.730 [85/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:23:36.730 [86/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:23:36.730 [87/265] Linking static target lib/librte_eal.a 00:23:36.989 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:23:36.989 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:23:36.989 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:23:36.989 [91/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:23:36.989 [92/265] Linking static target lib/librte_rcu.a 00:23:37.247 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:23:37.247 [94/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:23:37.247 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:23:37.247 [96/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:23:37.247 [97/265] Linking static target lib/librte_mempool.a 00:23:37.505 [98/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:23:37.505 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:23:37.505 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:23:37.505 [101/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:23:37.763 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:23:37.763 [103/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:23:37.763 [104/265] Linking static target lib/librte_mbuf.a 00:23:37.763 [105/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:23:37.763 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:23:37.763 [107/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:23:38.022 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:23:38.022 [109/265] Linking static target lib/librte_net.a 00:23:38.280 [110/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:23:38.280 [111/265] Linking static target lib/librte_meter.a 00:23:38.280 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:23:38.280 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:23:38.539 [114/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:23:38.539 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:23:38.539 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:23:38.539 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:23:38.797 [118/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:23:38.797 [119/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:23:39.098 
[120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:23:39.664 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:23:39.664 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:23:39.664 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:23:39.664 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:23:39.664 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:23:39.664 [126/265] Linking static target lib/librte_pci.a 00:23:39.664 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:23:39.664 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:23:39.664 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:23:39.922 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:23:39.922 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:23:39.922 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:23:39.922 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:23:39.922 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:23:40.181 [135/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:23:40.181 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:23:40.181 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:23:40.181 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:23:40.181 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:23:40.181 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:23:40.181 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:23:40.439 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:23:40.439 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:23:40.439 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:23:40.439 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:23:40.698 [146/265] Linking static target lib/librte_cmdline.a 00:23:40.698 [147/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:23:40.956 [148/265] Linking static target lib/librte_timer.a 00:23:40.956 [149/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:23:40.956 [150/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:23:40.956 [151/265] Linking static target lib/librte_ethdev.a 00:23:40.956 [152/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:23:40.956 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:23:41.215 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:23:41.215 [155/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:23:41.215 [156/265] Linking static target lib/librte_compressdev.a 00:23:41.215 [157/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:23:41.215 [158/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:23:41.215 [159/265] Linking static target lib/librte_hash.a 00:23:41.474 [160/265] Generating lib/timer.sym_chk with a custom command (wrapped by 
meson to capture output) 00:23:41.474 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:23:41.474 [162/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:23:41.732 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:23:41.732 [164/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:23:41.732 [165/265] Linking static target lib/librte_dmadev.a 00:23:41.732 [166/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:23:41.990 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:23:41.990 [168/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:23:41.990 [169/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:23:41.990 [170/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:42.248 [171/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:23:42.248 [172/265] Linking static target lib/librte_cryptodev.a 00:23:42.248 [173/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:23:42.248 [174/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:23:42.248 [175/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:23:42.248 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:42.582 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:23:42.582 [178/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:23:42.582 [179/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:23:42.582 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:23:42.582 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:23:42.840 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:23:42.840 [183/265] Linking static target lib/librte_power.a 00:23:42.840 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:23:42.840 [185/265] Linking static target lib/librte_reorder.a 00:23:42.840 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:23:43.097 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:23:43.097 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:23:43.097 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:23:43.097 [190/265] Linking static target lib/librte_security.a 00:23:43.355 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:23:43.355 [192/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:23:43.918 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:23:43.919 [194/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:23:43.919 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:23:43.919 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:23:43.919 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:23:43.919 [198/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:23:44.176 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:23:44.434 
[200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:23:44.434 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:23:44.434 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:23:44.434 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:23:44.434 [204/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:44.434 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:23:44.434 [206/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:23:44.692 [207/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:23:44.692 [208/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:23:44.692 [209/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:23:44.692 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:23:44.692 [211/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:23:44.692 [212/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:23:44.692 [213/265] Linking static target drivers/librte_bus_pci.a 00:23:44.950 [214/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:23:44.950 [215/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:23:44.950 [216/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:23:44.950 [217/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:23:44.950 [218/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:23:44.950 [219/265] Linking static target drivers/librte_bus_vdev.a 00:23:45.207 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:23:45.207 [221/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:23:45.207 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:23:45.207 [223/265] Linking static target drivers/librte_mempool_ring.a 00:23:45.207 [224/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:45.207 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:23:46.158 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:23:46.158 [227/265] Linking static target lib/librte_vhost.a 00:23:48.091 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:23:48.091 [229/265] Linking target lib/librte_eal.so.24.0 00:23:48.091 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:23:48.091 [231/265] Linking target lib/librte_timer.so.24.0 00:23:48.091 [232/265] Linking target lib/librte_pci.so.24.0 00:23:48.091 [233/265] Linking target drivers/librte_bus_vdev.so.24.0 00:23:48.091 [234/265] Linking target lib/librte_meter.so.24.0 00:23:48.091 [235/265] Linking target lib/librte_ring.so.24.0 00:23:48.091 [236/265] Linking target lib/librte_dmadev.so.24.0 00:23:48.091 [237/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:23:48.351 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:23:48.351 [239/265] 
Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:23:48.351 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:23:48.351 [241/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:23:48.351 [242/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:23:48.351 [243/265] Linking target lib/librte_mempool.so.24.0 00:23:48.351 [244/265] Linking target lib/librte_rcu.so.24.0 00:23:48.351 [245/265] Linking target drivers/librte_bus_pci.so.24.0 00:23:48.351 [246/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:23:48.351 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:23:48.351 [248/265] Linking target lib/librte_mbuf.so.24.0 00:23:48.351 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:23:48.609 [250/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:23:48.609 [251/265] Linking target lib/librte_net.so.24.0 00:23:48.609 [252/265] Linking target lib/librte_compressdev.so.24.0 00:23:48.609 [253/265] Linking target lib/librte_reorder.so.24.0 00:23:48.609 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:23:48.869 [255/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:23:48.869 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:23:48.869 [257/265] Linking target lib/librte_hash.so.24.0 00:23:48.869 [258/265] Linking target lib/librte_cmdline.so.24.0 00:23:48.869 [259/265] Linking target lib/librte_security.so.24.0 00:23:49.128 [260/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:23:49.387 [261/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:49.387 [262/265] Linking target lib/librte_ethdev.so.24.0 00:23:49.647 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:23:49.647 [264/265] Linking target lib/librte_power.so.24.0 00:23:49.647 [265/265] Linking target lib/librte_vhost.so.24.0 00:23:49.647 INFO: autodetecting backend as ninja 00:23:49.647 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:23:50.586 CC lib/ut_mock/mock.o 00:23:50.586 CC lib/log/log.o 00:23:50.586 CC lib/log/log_flags.o 00:23:50.586 CC lib/log/log_deprecated.o 00:23:50.586 CC lib/ut/ut.o 00:23:50.846 LIB libspdk_ut_mock.a 00:23:50.846 SO libspdk_ut_mock.so.5.0 00:23:50.846 LIB libspdk_log.a 00:23:50.846 LIB libspdk_ut.a 00:23:50.846 SO libspdk_ut.so.1.0 00:23:50.846 SO libspdk_log.so.6.1 00:23:50.846 SYMLINK libspdk_ut_mock.so 00:23:50.846 SYMLINK libspdk_ut.so 00:23:50.846 SYMLINK libspdk_log.so 00:23:51.113 CXX lib/trace_parser/trace.o 00:23:51.113 CC lib/util/base64.o 00:23:51.113 CC lib/util/bit_array.o 00:23:51.113 CC lib/util/cpuset.o 00:23:51.113 CC lib/util/crc16.o 00:23:51.113 CC lib/util/crc32c.o 00:23:51.113 CC lib/util/crc32.o 00:23:51.113 CC lib/dma/dma.o 00:23:51.113 CC lib/ioat/ioat.o 00:23:51.113 CC lib/vfio_user/host/vfio_user_pci.o 00:23:51.113 CC lib/util/crc32_ieee.o 00:23:51.113 CC lib/util/crc64.o 00:23:51.113 CC lib/util/dif.o 00:23:51.371 CC lib/util/fd.o 00:23:51.371 LIB libspdk_dma.a 00:23:51.371 CC lib/vfio_user/host/vfio_user.o 00:23:51.371 CC lib/util/file.o 00:23:51.371 SO libspdk_dma.so.3.0 00:23:51.371 CC lib/util/hexlify.o 00:23:51.371 CC lib/util/iov.o 
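For reference, the DPDK 23.11.0 "User defined options" block and the ninja INFO lines recorded above imply a configure step along the lines of the sketch below. This is a reconstruction from the logged options, not the literal command (SPDK's dpdkbuild wrapper issues it internally and this log does not echo it); the source-directory argument /home/vagrant/spdk_repo/spdk/dpdk is an assumption inferred from the prefix and build-tmp paths shown in the log.

    # Sketch: reproduce the logged DPDK configuration (option values taken verbatim from the log above;
    # the source directory is assumed, not logged)
    meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp /home/vagrant/spdk_repo/spdk/dpdk \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --libdir=lib \
        --buildtype=debug \
        --default-library=shared \
        -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds' \
        -Dcpu_instruction_set=native \
        -Dtests=false \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
        -Ddisable_libs=acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
    # Build with the same backend command and parallelism the log records:
    ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10

The disabled-driver listing that opens this section is consistent with this configuration: only bus/pci, bus/vdev, and mempool/ring are enabled, so every other driver reports "not in enabled drivers build config".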
00:23:51.371 SYMLINK libspdk_dma.so 00:23:51.371 CC lib/util/math.o 00:23:51.371 CC lib/util/pipe.o 00:23:51.371 LIB libspdk_ioat.a 00:23:51.371 CC lib/util/strerror_tls.o 00:23:51.371 CC lib/util/string.o 00:23:51.371 SO libspdk_ioat.so.6.0 00:23:51.371 LIB libspdk_vfio_user.a 00:23:51.371 CC lib/util/uuid.o 00:23:51.371 SYMLINK libspdk_ioat.so 00:23:51.371 CC lib/util/fd_group.o 00:23:51.371 SO libspdk_vfio_user.so.4.0 00:23:51.630 CC lib/util/xor.o 00:23:51.631 CC lib/util/zipf.o 00:23:51.631 SYMLINK libspdk_vfio_user.so 00:23:51.631 LIB libspdk_util.a 00:23:51.890 SO libspdk_util.so.8.0 00:23:51.890 LIB libspdk_trace_parser.a 00:23:51.890 SYMLINK libspdk_util.so 00:23:51.890 SO libspdk_trace_parser.so.4.0 00:23:52.150 CC lib/idxd/idxd.o 00:23:52.150 CC lib/vmd/vmd.o 00:23:52.150 CC lib/vmd/led.o 00:23:52.150 CC lib/rdma/common.o 00:23:52.150 CC lib/rdma/rdma_verbs.o 00:23:52.150 CC lib/idxd/idxd_user.o 00:23:52.150 CC lib/conf/conf.o 00:23:52.150 CC lib/env_dpdk/env.o 00:23:52.150 CC lib/json/json_parse.o 00:23:52.150 SYMLINK libspdk_trace_parser.so 00:23:52.150 CC lib/json/json_util.o 00:23:52.150 CC lib/json/json_write.o 00:23:52.409 CC lib/env_dpdk/memory.o 00:23:52.409 LIB libspdk_conf.a 00:23:52.409 CC lib/env_dpdk/pci.o 00:23:52.409 CC lib/env_dpdk/init.o 00:23:52.409 CC lib/env_dpdk/threads.o 00:23:52.409 SO libspdk_conf.so.5.0 00:23:52.409 LIB libspdk_rdma.a 00:23:52.409 SYMLINK libspdk_conf.so 00:23:52.409 CC lib/env_dpdk/pci_ioat.o 00:23:52.409 SO libspdk_rdma.so.5.0 00:23:52.409 CC lib/env_dpdk/pci_virtio.o 00:23:52.409 SYMLINK libspdk_rdma.so 00:23:52.409 CC lib/env_dpdk/pci_vmd.o 00:23:52.409 LIB libspdk_json.a 00:23:52.409 SO libspdk_json.so.5.1 00:23:52.409 CC lib/env_dpdk/pci_idxd.o 00:23:52.668 LIB libspdk_idxd.a 00:23:52.668 CC lib/env_dpdk/pci_event.o 00:23:52.668 SO libspdk_idxd.so.11.0 00:23:52.668 SYMLINK libspdk_json.so 00:23:52.668 CC lib/env_dpdk/sigbus_handler.o 00:23:52.668 CC lib/env_dpdk/pci_dpdk.o 00:23:52.668 CC lib/env_dpdk/pci_dpdk_2207.o 00:23:52.668 SYMLINK libspdk_idxd.so 00:23:52.668 CC lib/env_dpdk/pci_dpdk_2211.o 00:23:52.668 LIB libspdk_vmd.a 00:23:52.668 SO libspdk_vmd.so.5.0 00:23:52.668 CC lib/jsonrpc/jsonrpc_server.o 00:23:52.668 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:23:52.668 CC lib/jsonrpc/jsonrpc_client.o 00:23:52.668 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:23:52.668 SYMLINK libspdk_vmd.so 00:23:52.945 LIB libspdk_jsonrpc.a 00:23:52.945 SO libspdk_jsonrpc.so.5.1 00:23:53.205 SYMLINK libspdk_jsonrpc.so 00:23:53.205 LIB libspdk_env_dpdk.a 00:23:53.464 CC lib/rpc/rpc.o 00:23:53.464 SO libspdk_env_dpdk.so.13.0 00:23:53.464 LIB libspdk_rpc.a 00:23:53.464 SYMLINK libspdk_env_dpdk.so 00:23:53.723 SO libspdk_rpc.so.5.0 00:23:53.723 SYMLINK libspdk_rpc.so 00:23:53.982 CC lib/trace/trace.o 00:23:53.982 CC lib/trace/trace_rpc.o 00:23:53.982 CC lib/notify/notify.o 00:23:53.982 CC lib/trace/trace_flags.o 00:23:53.982 CC lib/notify/notify_rpc.o 00:23:53.982 CC lib/sock/sock.o 00:23:53.982 CC lib/sock/sock_rpc.o 00:23:54.241 LIB libspdk_notify.a 00:23:54.241 LIB libspdk_trace.a 00:23:54.241 SO libspdk_notify.so.5.0 00:23:54.241 SO libspdk_trace.so.9.0 00:23:54.241 LIB libspdk_sock.a 00:23:54.241 SYMLINK libspdk_notify.so 00:23:54.241 SYMLINK libspdk_trace.so 00:23:54.241 SO libspdk_sock.so.8.0 00:23:54.500 SYMLINK libspdk_sock.so 00:23:54.500 CC lib/thread/thread.o 00:23:54.500 CC lib/thread/iobuf.o 00:23:54.758 CC lib/nvme/nvme_ctrlr_cmd.o 00:23:54.758 CC lib/nvme/nvme_ctrlr.o 00:23:54.758 CC lib/nvme/nvme_fabric.o 00:23:54.758 CC lib/nvme/nvme_ns_cmd.o 
00:23:54.758 CC lib/nvme/nvme_qpair.o 00:23:54.758 CC lib/nvme/nvme_ns.o 00:23:54.758 CC lib/nvme/nvme_pcie_common.o 00:23:54.758 CC lib/nvme/nvme_pcie.o 00:23:54.758 CC lib/nvme/nvme.o 00:23:55.325 CC lib/nvme/nvme_quirks.o 00:23:55.325 CC lib/nvme/nvme_transport.o 00:23:55.325 CC lib/nvme/nvme_discovery.o 00:23:55.325 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:23:55.325 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:23:55.325 CC lib/nvme/nvme_tcp.o 00:23:55.583 CC lib/nvme/nvme_opal.o 00:23:55.583 CC lib/nvme/nvme_io_msg.o 00:23:55.842 CC lib/nvme/nvme_poll_group.o 00:23:55.842 LIB libspdk_thread.a 00:23:55.842 CC lib/nvme/nvme_zns.o 00:23:55.842 CC lib/nvme/nvme_cuse.o 00:23:55.842 SO libspdk_thread.so.9.0 00:23:55.842 CC lib/nvme/nvme_vfio_user.o 00:23:56.112 SYMLINK libspdk_thread.so 00:23:56.112 CC lib/nvme/nvme_rdma.o 00:23:56.112 CC lib/accel/accel.o 00:23:56.112 CC lib/blob/blobstore.o 00:23:56.112 CC lib/blob/request.o 00:23:56.388 CC lib/blob/zeroes.o 00:23:56.646 CC lib/blob/blob_bs_dev.o 00:23:56.646 CC lib/accel/accel_rpc.o 00:23:56.646 CC lib/accel/accel_sw.o 00:23:56.646 CC lib/init/json_config.o 00:23:56.646 CC lib/init/subsystem.o 00:23:56.646 CC lib/init/subsystem_rpc.o 00:23:56.646 CC lib/virtio/virtio.o 00:23:56.905 CC lib/vfu_tgt/tgt_endpoint.o 00:23:56.905 CC lib/virtio/virtio_vhost_user.o 00:23:56.905 CC lib/init/rpc.o 00:23:56.905 CC lib/vfu_tgt/tgt_rpc.o 00:23:56.905 CC lib/virtio/virtio_vfio_user.o 00:23:56.905 CC lib/virtio/virtio_pci.o 00:23:56.905 LIB libspdk_init.a 00:23:57.164 LIB libspdk_accel.a 00:23:57.164 SO libspdk_init.so.4.0 00:23:57.164 SO libspdk_accel.so.14.0 00:23:57.164 LIB libspdk_vfu_tgt.a 00:23:57.164 SYMLINK libspdk_init.so 00:23:57.164 SO libspdk_vfu_tgt.so.2.0 00:23:57.164 SYMLINK libspdk_accel.so 00:23:57.164 SYMLINK libspdk_vfu_tgt.so 00:23:57.164 CC lib/event/app.o 00:23:57.164 CC lib/event/log_rpc.o 00:23:57.164 CC lib/event/reactor.o 00:23:57.164 LIB libspdk_virtio.a 00:23:57.164 CC lib/event/app_rpc.o 00:23:57.164 CC lib/event/scheduler_static.o 00:23:57.164 LIB libspdk_nvme.a 00:23:57.422 SO libspdk_virtio.so.6.0 00:23:57.422 CC lib/bdev/bdev.o 00:23:57.422 CC lib/bdev/bdev_rpc.o 00:23:57.422 SYMLINK libspdk_virtio.so 00:23:57.422 CC lib/bdev/bdev_zone.o 00:23:57.422 CC lib/bdev/part.o 00:23:57.422 CC lib/bdev/scsi_nvme.o 00:23:57.422 SO libspdk_nvme.so.12.0 00:23:57.681 LIB libspdk_event.a 00:23:57.681 SO libspdk_event.so.12.0 00:23:57.681 SYMLINK libspdk_nvme.so 00:23:57.940 SYMLINK libspdk_event.so 00:23:58.876 LIB libspdk_blob.a 00:23:58.876 SO libspdk_blob.so.10.1 00:23:58.876 SYMLINK libspdk_blob.so 00:23:59.196 CC lib/blobfs/blobfs.o 00:23:59.196 CC lib/blobfs/tree.o 00:23:59.196 CC lib/lvol/lvol.o 00:23:59.764 LIB libspdk_bdev.a 00:23:59.764 SO libspdk_bdev.so.14.0 00:23:59.764 SYMLINK libspdk_bdev.so 00:23:59.764 LIB libspdk_blobfs.a 00:23:59.764 SO libspdk_blobfs.so.9.0 00:24:00.023 LIB libspdk_lvol.a 00:24:00.023 SYMLINK libspdk_blobfs.so 00:24:00.023 CC lib/ublk/ublk.o 00:24:00.024 CC lib/ublk/ublk_rpc.o 00:24:00.024 CC lib/ftl/ftl_core.o 00:24:00.024 CC lib/ftl/ftl_init.o 00:24:00.024 CC lib/scsi/dev.o 00:24:00.024 CC lib/ftl/ftl_debug.o 00:24:00.024 CC lib/ftl/ftl_layout.o 00:24:00.024 SO libspdk_lvol.so.9.1 00:24:00.024 CC lib/nvmf/ctrlr.o 00:24:00.024 CC lib/nbd/nbd.o 00:24:00.024 SYMLINK libspdk_lvol.so 00:24:00.024 CC lib/nvmf/ctrlr_discovery.o 00:24:00.024 CC lib/nvmf/ctrlr_bdev.o 00:24:00.024 CC lib/ftl/ftl_io.o 00:24:00.282 CC lib/ftl/ftl_sb.o 00:24:00.282 CC lib/scsi/lun.o 00:24:00.282 CC lib/ftl/ftl_l2p.o 00:24:00.282 CC 
lib/nvmf/subsystem.o 00:24:00.282 CC lib/nbd/nbd_rpc.o 00:24:00.282 CC lib/nvmf/nvmf.o 00:24:00.282 CC lib/ftl/ftl_l2p_flat.o 00:24:00.541 CC lib/scsi/port.o 00:24:00.541 CC lib/scsi/scsi.o 00:24:00.541 LIB libspdk_nbd.a 00:24:00.541 CC lib/ftl/ftl_nv_cache.o 00:24:00.541 SO libspdk_nbd.so.6.0 00:24:00.541 LIB libspdk_ublk.a 00:24:00.541 SO libspdk_ublk.so.2.0 00:24:00.541 SYMLINK libspdk_nbd.so 00:24:00.541 CC lib/scsi/scsi_bdev.o 00:24:00.541 CC lib/scsi/scsi_pr.o 00:24:00.541 CC lib/scsi/scsi_rpc.o 00:24:00.541 SYMLINK libspdk_ublk.so 00:24:00.541 CC lib/scsi/task.o 00:24:00.541 CC lib/nvmf/nvmf_rpc.o 00:24:00.801 CC lib/nvmf/transport.o 00:24:00.801 CC lib/ftl/ftl_band.o 00:24:00.801 CC lib/ftl/ftl_band_ops.o 00:24:00.801 CC lib/ftl/ftl_writer.o 00:24:01.070 LIB libspdk_scsi.a 00:24:01.071 SO libspdk_scsi.so.8.0 00:24:01.071 CC lib/ftl/ftl_rq.o 00:24:01.071 CC lib/ftl/ftl_reloc.o 00:24:01.071 CC lib/nvmf/tcp.o 00:24:01.071 CC lib/nvmf/vfio_user.o 00:24:01.071 SYMLINK libspdk_scsi.so 00:24:01.071 CC lib/ftl/ftl_l2p_cache.o 00:24:01.330 CC lib/ftl/ftl_p2l.o 00:24:01.330 CC lib/ftl/mngt/ftl_mngt.o 00:24:01.330 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:24:01.330 CC lib/nvmf/rdma.o 00:24:01.330 CC lib/iscsi/conn.o 00:24:01.330 CC lib/iscsi/init_grp.o 00:24:01.330 CC lib/iscsi/iscsi.o 00:24:01.589 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:24:01.589 CC lib/ftl/mngt/ftl_mngt_startup.o 00:24:01.589 CC lib/ftl/mngt/ftl_mngt_md.o 00:24:01.589 CC lib/ftl/mngt/ftl_mngt_misc.o 00:24:01.589 CC lib/iscsi/md5.o 00:24:01.589 CC lib/iscsi/param.o 00:24:01.849 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:24:01.849 CC lib/iscsi/portal_grp.o 00:24:01.849 CC lib/iscsi/tgt_node.o 00:24:01.850 CC lib/iscsi/iscsi_subsystem.o 00:24:01.850 CC lib/iscsi/iscsi_rpc.o 00:24:01.850 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:24:02.109 CC lib/iscsi/task.o 00:24:02.109 CC lib/ftl/mngt/ftl_mngt_band.o 00:24:02.109 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:24:02.109 CC lib/vhost/vhost.o 00:24:02.368 CC lib/vhost/vhost_rpc.o 00:24:02.368 CC lib/vhost/vhost_scsi.o 00:24:02.368 CC lib/vhost/vhost_blk.o 00:24:02.368 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:24:02.368 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:24:02.626 CC lib/vhost/rte_vhost_user.o 00:24:02.626 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:24:02.626 CC lib/ftl/utils/ftl_conf.o 00:24:02.626 LIB libspdk_iscsi.a 00:24:02.626 SO libspdk_iscsi.so.7.0 00:24:02.886 CC lib/ftl/utils/ftl_md.o 00:24:02.886 CC lib/ftl/utils/ftl_mempool.o 00:24:02.886 CC lib/ftl/utils/ftl_bitmap.o 00:24:02.886 CC lib/ftl/utils/ftl_property.o 00:24:02.886 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:24:02.886 SYMLINK libspdk_iscsi.so 00:24:02.886 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:24:02.886 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:24:03.145 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:24:03.145 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:24:03.145 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:24:03.145 CC lib/ftl/upgrade/ftl_sb_v3.o 00:24:03.145 CC lib/ftl/upgrade/ftl_sb_v5.o 00:24:03.145 CC lib/ftl/nvc/ftl_nvc_dev.o 00:24:03.145 LIB libspdk_nvmf.a 00:24:03.145 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:24:03.145 CC lib/ftl/base/ftl_base_dev.o 00:24:03.145 CC lib/ftl/base/ftl_base_bdev.o 00:24:03.145 CC lib/ftl/ftl_trace.o 00:24:03.405 SO libspdk_nvmf.so.17.0 00:24:03.405 LIB libspdk_ftl.a 00:24:03.405 SYMLINK libspdk_nvmf.so 00:24:03.405 LIB libspdk_vhost.a 00:24:03.664 SO libspdk_vhost.so.7.1 00:24:03.664 SO libspdk_ftl.so.8.0 00:24:03.664 SYMLINK libspdk_vhost.so 00:24:03.923 SYMLINK libspdk_ftl.so 00:24:04.182 CC module/env_dpdk/env_dpdk_rpc.o 
00:24:04.182 CC module/vfu_device/vfu_virtio.o 00:24:04.182 CC module/accel/error/accel_error.o 00:24:04.182 CC module/blob/bdev/blob_bdev.o 00:24:04.182 CC module/accel/ioat/accel_ioat.o 00:24:04.182 CC module/accel/iaa/accel_iaa.o 00:24:04.182 CC module/accel/dsa/accel_dsa.o 00:24:04.182 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:24:04.182 CC module/scheduler/dynamic/scheduler_dynamic.o 00:24:04.182 CC module/sock/posix/posix.o 00:24:04.182 LIB libspdk_env_dpdk_rpc.a 00:24:04.182 SO libspdk_env_dpdk_rpc.so.5.0 00:24:04.442 LIB libspdk_scheduler_dpdk_governor.a 00:24:04.442 CC module/accel/error/accel_error_rpc.o 00:24:04.442 SYMLINK libspdk_env_dpdk_rpc.so 00:24:04.442 CC module/accel/ioat/accel_ioat_rpc.o 00:24:04.442 SO libspdk_scheduler_dpdk_governor.so.3.0 00:24:04.442 CC module/accel/iaa/accel_iaa_rpc.o 00:24:04.442 LIB libspdk_scheduler_dynamic.a 00:24:04.442 CC module/accel/dsa/accel_dsa_rpc.o 00:24:04.442 LIB libspdk_blob_bdev.a 00:24:04.442 SO libspdk_scheduler_dynamic.so.3.0 00:24:04.442 SYMLINK libspdk_scheduler_dpdk_governor.so 00:24:04.442 SO libspdk_blob_bdev.so.10.1 00:24:04.442 CC module/vfu_device/vfu_virtio_blk.o 00:24:04.442 CC module/scheduler/gscheduler/gscheduler.o 00:24:04.442 SYMLINK libspdk_scheduler_dynamic.so 00:24:04.442 CC module/vfu_device/vfu_virtio_scsi.o 00:24:04.442 LIB libspdk_accel_error.a 00:24:04.442 LIB libspdk_accel_ioat.a 00:24:04.442 SO libspdk_accel_error.so.1.0 00:24:04.442 LIB libspdk_accel_iaa.a 00:24:04.442 SYMLINK libspdk_blob_bdev.so 00:24:04.442 CC module/vfu_device/vfu_virtio_rpc.o 00:24:04.442 SO libspdk_accel_ioat.so.5.0 00:24:04.442 SO libspdk_accel_iaa.so.2.0 00:24:04.442 LIB libspdk_accel_dsa.a 00:24:04.701 SYMLINK libspdk_accel_error.so 00:24:04.701 SO libspdk_accel_dsa.so.4.0 00:24:04.701 SYMLINK libspdk_accel_ioat.so 00:24:04.701 LIB libspdk_scheduler_gscheduler.a 00:24:04.701 SYMLINK libspdk_accel_iaa.so 00:24:04.701 SO libspdk_scheduler_gscheduler.so.3.0 00:24:04.701 SYMLINK libspdk_accel_dsa.so 00:24:04.701 SYMLINK libspdk_scheduler_gscheduler.so 00:24:04.701 CC module/bdev/delay/vbdev_delay.o 00:24:04.701 CC module/bdev/error/vbdev_error.o 00:24:04.701 CC module/bdev/gpt/gpt.o 00:24:04.701 CC module/blobfs/bdev/blobfs_bdev.o 00:24:04.701 CC module/bdev/lvol/vbdev_lvol.o 00:24:04.701 CC module/bdev/nvme/bdev_nvme.o 00:24:04.701 CC module/bdev/null/bdev_null.o 00:24:04.701 CC module/bdev/malloc/bdev_malloc.o 00:24:04.959 LIB libspdk_vfu_device.a 00:24:04.959 LIB libspdk_sock_posix.a 00:24:04.959 SO libspdk_vfu_device.so.2.0 00:24:04.959 SO libspdk_sock_posix.so.5.0 00:24:04.959 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:24:04.959 CC module/bdev/gpt/vbdev_gpt.o 00:24:04.959 SYMLINK libspdk_vfu_device.so 00:24:04.959 CC module/bdev/null/bdev_null_rpc.o 00:24:04.959 CC module/bdev/error/vbdev_error_rpc.o 00:24:04.959 SYMLINK libspdk_sock_posix.so 00:24:04.959 CC module/bdev/delay/vbdev_delay_rpc.o 00:24:05.219 CC module/bdev/passthru/vbdev_passthru.o 00:24:05.219 LIB libspdk_blobfs_bdev.a 00:24:05.219 SO libspdk_blobfs_bdev.so.5.0 00:24:05.219 CC module/bdev/malloc/bdev_malloc_rpc.o 00:24:05.219 LIB libspdk_bdev_null.a 00:24:05.219 LIB libspdk_bdev_gpt.a 00:24:05.219 CC module/bdev/raid/bdev_raid.o 00:24:05.219 LIB libspdk_bdev_error.a 00:24:05.219 SO libspdk_bdev_null.so.5.0 00:24:05.219 SO libspdk_bdev_gpt.so.5.0 00:24:05.219 LIB libspdk_bdev_delay.a 00:24:05.219 SYMLINK libspdk_blobfs_bdev.so 00:24:05.219 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:24:05.219 CC module/bdev/raid/bdev_raid_rpc.o 00:24:05.219 SO 
libspdk_bdev_error.so.5.0 00:24:05.219 SYMLINK libspdk_bdev_null.so 00:24:05.219 SYMLINK libspdk_bdev_gpt.so 00:24:05.219 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:24:05.219 SO libspdk_bdev_delay.so.5.0 00:24:05.219 CC module/bdev/nvme/bdev_nvme_rpc.o 00:24:05.219 LIB libspdk_bdev_malloc.a 00:24:05.219 SYMLINK libspdk_bdev_error.so 00:24:05.219 CC module/bdev/nvme/nvme_rpc.o 00:24:05.219 SYMLINK libspdk_bdev_delay.so 00:24:05.219 SO libspdk_bdev_malloc.so.5.0 00:24:05.478 CC module/bdev/raid/bdev_raid_sb.o 00:24:05.478 CC module/bdev/split/vbdev_split.o 00:24:05.478 SYMLINK libspdk_bdev_malloc.so 00:24:05.478 CC module/bdev/raid/raid0.o 00:24:05.478 CC module/bdev/raid/raid1.o 00:24:05.478 LIB libspdk_bdev_passthru.a 00:24:05.478 SO libspdk_bdev_passthru.so.5.0 00:24:05.478 LIB libspdk_bdev_lvol.a 00:24:05.478 SO libspdk_bdev_lvol.so.5.0 00:24:05.478 SYMLINK libspdk_bdev_passthru.so 00:24:05.478 CC module/bdev/raid/concat.o 00:24:05.478 SYMLINK libspdk_bdev_lvol.so 00:24:05.478 CC module/bdev/split/vbdev_split_rpc.o 00:24:05.736 CC module/bdev/nvme/bdev_mdns_client.o 00:24:05.736 CC module/bdev/zone_block/vbdev_zone_block.o 00:24:05.736 CC module/bdev/nvme/vbdev_opal.o 00:24:05.736 CC module/bdev/aio/bdev_aio.o 00:24:05.736 CC module/bdev/ftl/bdev_ftl.o 00:24:05.736 LIB libspdk_bdev_split.a 00:24:05.736 CC module/bdev/aio/bdev_aio_rpc.o 00:24:05.736 SO libspdk_bdev_split.so.5.0 00:24:05.737 CC module/bdev/nvme/vbdev_opal_rpc.o 00:24:05.737 SYMLINK libspdk_bdev_split.so 00:24:05.737 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:24:05.995 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:24:05.995 CC module/bdev/ftl/bdev_ftl_rpc.o 00:24:05.995 LIB libspdk_bdev_raid.a 00:24:05.995 SO libspdk_bdev_raid.so.5.0 00:24:05.995 LIB libspdk_bdev_aio.a 00:24:05.995 CC module/bdev/iscsi/bdev_iscsi.o 00:24:05.995 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:24:05.995 SO libspdk_bdev_aio.so.5.0 00:24:05.995 LIB libspdk_bdev_zone_block.a 00:24:05.995 SYMLINK libspdk_bdev_raid.so 00:24:05.995 CC module/bdev/virtio/bdev_virtio_scsi.o 00:24:05.995 CC module/bdev/virtio/bdev_virtio_blk.o 00:24:05.995 CC module/bdev/virtio/bdev_virtio_rpc.o 00:24:05.995 SO libspdk_bdev_zone_block.so.5.0 00:24:05.996 SYMLINK libspdk_bdev_aio.so 00:24:05.996 LIB libspdk_bdev_ftl.a 00:24:06.313 SO libspdk_bdev_ftl.so.5.0 00:24:06.313 SYMLINK libspdk_bdev_zone_block.so 00:24:06.313 SYMLINK libspdk_bdev_ftl.so 00:24:06.313 LIB libspdk_bdev_iscsi.a 00:24:06.313 SO libspdk_bdev_iscsi.so.5.0 00:24:06.586 SYMLINK libspdk_bdev_iscsi.so 00:24:06.586 LIB libspdk_bdev_virtio.a 00:24:06.586 SO libspdk_bdev_virtio.so.5.0 00:24:06.586 SYMLINK libspdk_bdev_virtio.so 00:24:06.845 LIB libspdk_bdev_nvme.a 00:24:06.845 SO libspdk_bdev_nvme.so.6.0 00:24:06.845 SYMLINK libspdk_bdev_nvme.so 00:24:07.412 CC module/event/subsystems/sock/sock.o 00:24:07.412 CC module/event/subsystems/iobuf/iobuf.o 00:24:07.412 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:24:07.412 CC module/event/subsystems/scheduler/scheduler.o 00:24:07.412 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:24:07.412 CC module/event/subsystems/vmd/vmd_rpc.o 00:24:07.412 CC module/event/subsystems/vmd/vmd.o 00:24:07.412 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:24:07.412 LIB libspdk_event_sock.a 00:24:07.412 LIB libspdk_event_iobuf.a 00:24:07.412 LIB libspdk_event_vhost_blk.a 00:24:07.412 LIB libspdk_event_scheduler.a 00:24:07.412 LIB libspdk_event_vfu_tgt.a 00:24:07.412 SO libspdk_event_sock.so.4.0 00:24:07.412 LIB libspdk_event_vmd.a 00:24:07.412 SO 
libspdk_event_iobuf.so.2.0 00:24:07.412 SO libspdk_event_vhost_blk.so.2.0 00:24:07.412 SO libspdk_event_scheduler.so.3.0 00:24:07.412 SO libspdk_event_vfu_tgt.so.2.0 00:24:07.412 SO libspdk_event_vmd.so.5.0 00:24:07.412 SYMLINK libspdk_event_sock.so 00:24:07.412 SYMLINK libspdk_event_vhost_blk.so 00:24:07.671 SYMLINK libspdk_event_scheduler.so 00:24:07.671 SYMLINK libspdk_event_iobuf.so 00:24:07.671 SYMLINK libspdk_event_vfu_tgt.so 00:24:07.671 SYMLINK libspdk_event_vmd.so 00:24:07.671 CC module/event/subsystems/accel/accel.o 00:24:07.929 LIB libspdk_event_accel.a 00:24:07.929 SO libspdk_event_accel.so.5.0 00:24:07.929 SYMLINK libspdk_event_accel.so 00:24:08.188 CC module/event/subsystems/bdev/bdev.o 00:24:08.447 LIB libspdk_event_bdev.a 00:24:08.447 SO libspdk_event_bdev.so.5.0 00:24:08.447 SYMLINK libspdk_event_bdev.so 00:24:08.706 CC module/event/subsystems/ublk/ublk.o 00:24:08.706 CC module/event/subsystems/nbd/nbd.o 00:24:08.706 CC module/event/subsystems/scsi/scsi.o 00:24:08.706 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:24:08.706 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:24:08.965 LIB libspdk_event_nbd.a 00:24:08.965 LIB libspdk_event_ublk.a 00:24:08.965 LIB libspdk_event_scsi.a 00:24:08.965 SO libspdk_event_nbd.so.5.0 00:24:08.965 SO libspdk_event_ublk.so.2.0 00:24:08.965 SO libspdk_event_scsi.so.5.0 00:24:08.965 LIB libspdk_event_nvmf.a 00:24:08.965 SYMLINK libspdk_event_nbd.so 00:24:08.965 SYMLINK libspdk_event_ublk.so 00:24:08.965 SYMLINK libspdk_event_scsi.so 00:24:08.965 SO libspdk_event_nvmf.so.5.0 00:24:09.224 SYMLINK libspdk_event_nvmf.so 00:24:09.224 CC module/event/subsystems/iscsi/iscsi.o 00:24:09.224 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:24:09.483 LIB libspdk_event_vhost_scsi.a 00:24:09.483 LIB libspdk_event_iscsi.a 00:24:09.483 SO libspdk_event_vhost_scsi.so.2.0 00:24:09.483 SO libspdk_event_iscsi.so.5.0 00:24:09.483 SYMLINK libspdk_event_vhost_scsi.so 00:24:09.483 SYMLINK libspdk_event_iscsi.so 00:24:09.741 SO libspdk.so.5.0 00:24:09.741 SYMLINK libspdk.so 00:24:10.000 CC app/trace_record/trace_record.o 00:24:10.000 TEST_HEADER include/spdk/accel.h 00:24:10.000 TEST_HEADER include/spdk/accel_module.h 00:24:10.000 TEST_HEADER include/spdk/assert.h 00:24:10.000 CXX app/trace/trace.o 00:24:10.000 TEST_HEADER include/spdk/barrier.h 00:24:10.000 TEST_HEADER include/spdk/base64.h 00:24:10.000 TEST_HEADER include/spdk/bdev.h 00:24:10.000 TEST_HEADER include/spdk/bdev_module.h 00:24:10.000 TEST_HEADER include/spdk/bdev_zone.h 00:24:10.000 TEST_HEADER include/spdk/bit_array.h 00:24:10.000 TEST_HEADER include/spdk/bit_pool.h 00:24:10.000 TEST_HEADER include/spdk/blob_bdev.h 00:24:10.000 TEST_HEADER include/spdk/blobfs_bdev.h 00:24:10.000 TEST_HEADER include/spdk/blobfs.h 00:24:10.000 TEST_HEADER include/spdk/blob.h 00:24:10.000 TEST_HEADER include/spdk/conf.h 00:24:10.000 TEST_HEADER include/spdk/config.h 00:24:10.000 TEST_HEADER include/spdk/cpuset.h 00:24:10.000 TEST_HEADER include/spdk/crc16.h 00:24:10.000 TEST_HEADER include/spdk/crc32.h 00:24:10.000 TEST_HEADER include/spdk/crc64.h 00:24:10.000 TEST_HEADER include/spdk/dif.h 00:24:10.000 TEST_HEADER include/spdk/dma.h 00:24:10.000 TEST_HEADER include/spdk/endian.h 00:24:10.000 TEST_HEADER include/spdk/env_dpdk.h 00:24:10.000 TEST_HEADER include/spdk/env.h 00:24:10.000 TEST_HEADER include/spdk/event.h 00:24:10.000 CC app/nvmf_tgt/nvmf_main.o 00:24:10.000 TEST_HEADER include/spdk/fd_group.h 00:24:10.000 TEST_HEADER include/spdk/fd.h 00:24:10.000 TEST_HEADER include/spdk/file.h 00:24:10.000 TEST_HEADER 
include/spdk/ftl.h 00:24:10.000 TEST_HEADER include/spdk/gpt_spec.h 00:24:10.000 TEST_HEADER include/spdk/hexlify.h 00:24:10.000 TEST_HEADER include/spdk/histogram_data.h 00:24:10.000 CC examples/accel/perf/accel_perf.o 00:24:10.000 TEST_HEADER include/spdk/idxd.h 00:24:10.000 TEST_HEADER include/spdk/idxd_spec.h 00:24:10.000 TEST_HEADER include/spdk/init.h 00:24:10.000 TEST_HEADER include/spdk/ioat.h 00:24:10.000 TEST_HEADER include/spdk/ioat_spec.h 00:24:10.000 TEST_HEADER include/spdk/iscsi_spec.h 00:24:10.000 TEST_HEADER include/spdk/json.h 00:24:10.000 TEST_HEADER include/spdk/jsonrpc.h 00:24:10.000 TEST_HEADER include/spdk/likely.h 00:24:10.000 TEST_HEADER include/spdk/log.h 00:24:10.000 CC examples/bdev/hello_world/hello_bdev.o 00:24:10.000 TEST_HEADER include/spdk/lvol.h 00:24:10.000 CC test/app/bdev_svc/bdev_svc.o 00:24:10.000 TEST_HEADER include/spdk/memory.h 00:24:10.000 CC test/blobfs/mkfs/mkfs.o 00:24:10.000 TEST_HEADER include/spdk/mmio.h 00:24:10.000 CC test/bdev/bdevio/bdevio.o 00:24:10.000 TEST_HEADER include/spdk/nbd.h 00:24:10.000 TEST_HEADER include/spdk/notify.h 00:24:10.000 TEST_HEADER include/spdk/nvme.h 00:24:10.000 TEST_HEADER include/spdk/nvme_intel.h 00:24:10.000 CC test/accel/dif/dif.o 00:24:10.000 TEST_HEADER include/spdk/nvme_ocssd.h 00:24:10.000 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:24:10.000 TEST_HEADER include/spdk/nvme_spec.h 00:24:10.000 TEST_HEADER include/spdk/nvme_zns.h 00:24:10.000 TEST_HEADER include/spdk/nvmf_cmd.h 00:24:10.000 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:24:10.000 TEST_HEADER include/spdk/nvmf.h 00:24:10.000 TEST_HEADER include/spdk/nvmf_spec.h 00:24:10.000 TEST_HEADER include/spdk/nvmf_transport.h 00:24:10.000 TEST_HEADER include/spdk/opal.h 00:24:10.000 TEST_HEADER include/spdk/opal_spec.h 00:24:10.000 TEST_HEADER include/spdk/pci_ids.h 00:24:10.000 TEST_HEADER include/spdk/pipe.h 00:24:10.000 TEST_HEADER include/spdk/queue.h 00:24:10.000 TEST_HEADER include/spdk/reduce.h 00:24:10.000 TEST_HEADER include/spdk/rpc.h 00:24:10.000 TEST_HEADER include/spdk/scheduler.h 00:24:10.000 TEST_HEADER include/spdk/scsi.h 00:24:10.000 TEST_HEADER include/spdk/scsi_spec.h 00:24:10.000 TEST_HEADER include/spdk/sock.h 00:24:10.000 TEST_HEADER include/spdk/stdinc.h 00:24:10.000 TEST_HEADER include/spdk/string.h 00:24:10.000 TEST_HEADER include/spdk/thread.h 00:24:10.000 TEST_HEADER include/spdk/trace.h 00:24:10.000 TEST_HEADER include/spdk/trace_parser.h 00:24:10.000 TEST_HEADER include/spdk/tree.h 00:24:10.000 TEST_HEADER include/spdk/ublk.h 00:24:10.000 TEST_HEADER include/spdk/util.h 00:24:10.000 TEST_HEADER include/spdk/uuid.h 00:24:10.000 TEST_HEADER include/spdk/version.h 00:24:10.000 TEST_HEADER include/spdk/vfio_user_pci.h 00:24:10.000 TEST_HEADER include/spdk/vfio_user_spec.h 00:24:10.000 TEST_HEADER include/spdk/vhost.h 00:24:10.000 TEST_HEADER include/spdk/vmd.h 00:24:10.000 TEST_HEADER include/spdk/xor.h 00:24:10.000 TEST_HEADER include/spdk/zipf.h 00:24:10.000 CXX test/cpp_headers/accel.o 00:24:10.000 LINK nvmf_tgt 00:24:10.000 LINK spdk_trace_record 00:24:10.259 LINK bdev_svc 00:24:10.259 LINK mkfs 00:24:10.259 CXX test/cpp_headers/accel_module.o 00:24:10.259 LINK hello_bdev 00:24:10.259 LINK spdk_trace 00:24:10.259 LINK dif 00:24:10.259 LINK bdevio 00:24:10.259 LINK accel_perf 00:24:10.259 CXX test/cpp_headers/assert.o 00:24:10.520 CC test/dma/test_dma/test_dma.o 00:24:10.520 CC test/env/vtophys/vtophys.o 00:24:10.521 CC test/env/mem_callbacks/mem_callbacks.o 00:24:10.521 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:24:10.521 CXX 
test/cpp_headers/barrier.o 00:24:10.521 CC examples/bdev/bdevperf/bdevperf.o 00:24:10.521 CC app/iscsi_tgt/iscsi_tgt.o 00:24:10.521 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:24:10.521 LINK vtophys 00:24:10.521 CC test/event/event_perf/event_perf.o 00:24:10.521 CXX test/cpp_headers/base64.o 00:24:10.521 CC app/spdk_tgt/spdk_tgt.o 00:24:10.779 LINK event_perf 00:24:10.779 LINK iscsi_tgt 00:24:10.779 LINK env_dpdk_post_init 00:24:10.779 CXX test/cpp_headers/bdev.o 00:24:10.779 LINK test_dma 00:24:10.779 LINK nvme_fuzz 00:24:10.779 LINK spdk_tgt 00:24:10.780 CC test/env/memory/memory_ut.o 00:24:11.039 CXX test/cpp_headers/bdev_module.o 00:24:11.039 CC test/event/reactor/reactor.o 00:24:11.039 CC test/nvme/aer/aer.o 00:24:11.039 CC test/lvol/esnap/esnap.o 00:24:11.039 LINK mem_callbacks 00:24:11.039 CC test/nvme/reset/reset.o 00:24:11.039 LINK reactor 00:24:11.039 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:24:11.039 CXX test/cpp_headers/bdev_zone.o 00:24:11.039 CC app/spdk_lspci/spdk_lspci.o 00:24:11.297 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:24:11.297 LINK bdevperf 00:24:11.297 LINK spdk_lspci 00:24:11.297 CXX test/cpp_headers/bit_array.o 00:24:11.297 CC test/event/reactor_perf/reactor_perf.o 00:24:11.297 LINK reset 00:24:11.297 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:24:11.297 LINK aer 00:24:11.555 CXX test/cpp_headers/bit_pool.o 00:24:11.555 LINK reactor_perf 00:24:11.555 CC app/spdk_nvme_perf/perf.o 00:24:11.555 CC app/spdk_nvme_identify/identify.o 00:24:11.555 CXX test/cpp_headers/blob_bdev.o 00:24:11.555 CC examples/blob/hello_world/hello_blob.o 00:24:11.555 CC test/event/app_repeat/app_repeat.o 00:24:11.813 CC test/nvme/sgl/sgl.o 00:24:11.813 LINK memory_ut 00:24:11.813 LINK vhost_fuzz 00:24:11.813 CXX test/cpp_headers/blobfs_bdev.o 00:24:11.813 LINK app_repeat 00:24:11.813 LINK hello_blob 00:24:11.813 LINK sgl 00:24:12.071 CXX test/cpp_headers/blobfs.o 00:24:12.071 CC test/env/pci/pci_ut.o 00:24:12.071 CC examples/blob/cli/blobcli.o 00:24:12.071 CC test/event/scheduler/scheduler.o 00:24:12.071 CXX test/cpp_headers/blob.o 00:24:12.071 CC app/spdk_nvme_discover/discovery_aer.o 00:24:12.071 CC test/nvme/e2edp/nvme_dp.o 00:24:12.330 LINK spdk_nvme_perf 00:24:12.330 LINK scheduler 00:24:12.330 LINK spdk_nvme_discover 00:24:12.330 CXX test/cpp_headers/conf.o 00:24:12.330 LINK spdk_nvme_identify 00:24:12.330 LINK pci_ut 00:24:12.330 LINK nvme_dp 00:24:12.330 CC test/nvme/overhead/overhead.o 00:24:12.330 LINK blobcli 00:24:12.330 CXX test/cpp_headers/config.o 00:24:12.589 CXX test/cpp_headers/cpuset.o 00:24:12.589 CC test/nvme/err_injection/err_injection.o 00:24:12.589 CC app/spdk_top/spdk_top.o 00:24:12.589 CXX test/cpp_headers/crc16.o 00:24:12.589 CXX test/cpp_headers/crc32.o 00:24:12.589 CC app/vhost/vhost.o 00:24:12.589 LINK iscsi_fuzz 00:24:12.589 LINK overhead 00:24:12.589 CC app/spdk_dd/spdk_dd.o 00:24:12.589 LINK err_injection 00:24:12.848 CXX test/cpp_headers/crc64.o 00:24:12.848 CC examples/ioat/perf/perf.o 00:24:12.848 LINK vhost 00:24:12.848 CC examples/ioat/verify/verify.o 00:24:12.848 CC test/app/histogram_perf/histogram_perf.o 00:24:12.848 CC test/nvme/startup/startup.o 00:24:12.848 CXX test/cpp_headers/dif.o 00:24:12.848 CC app/fio/nvme/fio_plugin.o 00:24:13.107 LINK ioat_perf 00:24:13.107 LINK spdk_dd 00:24:13.107 LINK verify 00:24:13.107 LINK histogram_perf 00:24:13.107 CXX test/cpp_headers/dma.o 00:24:13.107 LINK startup 00:24:13.107 CC test/nvme/reserve/reserve.o 00:24:13.107 CC test/nvme/simple_copy/simple_copy.o 00:24:13.107 CXX 
test/cpp_headers/endian.o 00:24:13.365 CC test/app/jsoncat/jsoncat.o 00:24:13.365 LINK spdk_top 00:24:13.365 CC examples/nvme/hello_world/hello_world.o 00:24:13.365 CC examples/nvme/reconnect/reconnect.o 00:24:13.365 LINK reserve 00:24:13.365 CC app/fio/bdev/fio_plugin.o 00:24:13.365 LINK jsoncat 00:24:13.365 CXX test/cpp_headers/env_dpdk.o 00:24:13.365 LINK spdk_nvme 00:24:13.365 LINK simple_copy 00:24:13.365 CC examples/nvme/nvme_manage/nvme_manage.o 00:24:13.624 LINK hello_world 00:24:13.624 CC test/nvme/connect_stress/connect_stress.o 00:24:13.624 CXX test/cpp_headers/env.o 00:24:13.624 CC test/nvme/boot_partition/boot_partition.o 00:24:13.624 CC test/app/stub/stub.o 00:24:13.624 CC test/nvme/compliance/nvme_compliance.o 00:24:13.624 LINK reconnect 00:24:13.624 LINK connect_stress 00:24:13.624 CXX test/cpp_headers/event.o 00:24:13.624 LINK boot_partition 00:24:13.624 CC test/nvme/fused_ordering/fused_ordering.o 00:24:13.624 LINK stub 00:24:13.883 LINK spdk_bdev 00:24:13.883 CXX test/cpp_headers/fd_group.o 00:24:13.883 CXX test/cpp_headers/fd.o 00:24:13.883 CXX test/cpp_headers/file.o 00:24:13.883 LINK fused_ordering 00:24:13.883 CXX test/cpp_headers/ftl.o 00:24:13.883 CXX test/cpp_headers/gpt_spec.o 00:24:13.883 LINK nvme_manage 00:24:13.883 LINK nvme_compliance 00:24:13.883 CC examples/sock/hello_world/hello_sock.o 00:24:14.142 CC examples/nvme/arbitration/arbitration.o 00:24:14.142 CC test/nvme/doorbell_aers/doorbell_aers.o 00:24:14.142 CXX test/cpp_headers/hexlify.o 00:24:14.142 CC test/nvme/fdp/fdp.o 00:24:14.142 CXX test/cpp_headers/histogram_data.o 00:24:14.142 CC test/nvme/cuse/cuse.o 00:24:14.142 CC examples/nvme/hotplug/hotplug.o 00:24:14.142 CC examples/nvme/cmb_copy/cmb_copy.o 00:24:14.142 LINK hello_sock 00:24:14.142 CXX test/cpp_headers/idxd.o 00:24:14.142 LINK doorbell_aers 00:24:14.453 CC examples/nvme/abort/abort.o 00:24:14.453 LINK arbitration 00:24:14.453 LINK cmb_copy 00:24:14.453 LINK hotplug 00:24:14.453 LINK fdp 00:24:14.453 CXX test/cpp_headers/idxd_spec.o 00:24:14.453 CC examples/vmd/lsvmd/lsvmd.o 00:24:14.453 CC examples/vmd/led/led.o 00:24:14.453 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:24:14.453 CXX test/cpp_headers/init.o 00:24:14.731 CC test/rpc_client/rpc_client_test.o 00:24:14.731 LINK lsvmd 00:24:14.731 LINK led 00:24:14.731 CC test/thread/poller_perf/poller_perf.o 00:24:14.731 LINK abort 00:24:14.731 CC examples/nvmf/nvmf/nvmf.o 00:24:14.731 CXX test/cpp_headers/ioat.o 00:24:14.731 LINK pmr_persistence 00:24:14.731 LINK rpc_client_test 00:24:14.731 LINK poller_perf 00:24:14.731 CC examples/util/zipf/zipf.o 00:24:14.988 CXX test/cpp_headers/ioat_spec.o 00:24:14.989 CC examples/thread/thread/thread_ex.o 00:24:14.989 CXX test/cpp_headers/iscsi_spec.o 00:24:14.989 CC examples/idxd/perf/perf.o 00:24:14.989 CXX test/cpp_headers/json.o 00:24:14.989 LINK nvmf 00:24:14.989 LINK zipf 00:24:14.989 CXX test/cpp_headers/jsonrpc.o 00:24:14.989 LINK cuse 00:24:14.989 CXX test/cpp_headers/likely.o 00:24:15.246 LINK thread 00:24:15.246 CXX test/cpp_headers/log.o 00:24:15.246 CC examples/interrupt_tgt/interrupt_tgt.o 00:24:15.246 LINK esnap 00:24:15.246 CXX test/cpp_headers/lvol.o 00:24:15.246 CXX test/cpp_headers/memory.o 00:24:15.246 LINK idxd_perf 00:24:15.246 CXX test/cpp_headers/mmio.o 00:24:15.246 CXX test/cpp_headers/nbd.o 00:24:15.246 CXX test/cpp_headers/notify.o 00:24:15.246 CXX test/cpp_headers/nvme.o 00:24:15.246 LINK interrupt_tgt 00:24:15.246 CXX test/cpp_headers/nvme_intel.o 00:24:15.246 CXX test/cpp_headers/nvme_ocssd.o 00:24:15.246 CXX 
test/cpp_headers/nvme_ocssd_spec.o 00:24:15.246 CXX test/cpp_headers/nvme_spec.o 00:24:15.506 CXX test/cpp_headers/nvme_zns.o 00:24:15.506 CXX test/cpp_headers/nvmf_cmd.o 00:24:15.506 CXX test/cpp_headers/nvmf_fc_spec.o 00:24:15.506 CXX test/cpp_headers/nvmf.o 00:24:15.506 CXX test/cpp_headers/nvmf_spec.o 00:24:15.506 CXX test/cpp_headers/nvmf_transport.o 00:24:15.506 CXX test/cpp_headers/opal.o 00:24:15.506 CXX test/cpp_headers/opal_spec.o 00:24:15.506 CXX test/cpp_headers/pci_ids.o 00:24:15.506 CXX test/cpp_headers/pipe.o 00:24:15.765 CXX test/cpp_headers/queue.o 00:24:15.765 CXX test/cpp_headers/reduce.o 00:24:15.765 CXX test/cpp_headers/rpc.o 00:24:15.765 CXX test/cpp_headers/scheduler.o 00:24:15.765 CXX test/cpp_headers/scsi.o 00:24:15.765 CXX test/cpp_headers/scsi_spec.o 00:24:15.765 CXX test/cpp_headers/sock.o 00:24:15.765 CXX test/cpp_headers/stdinc.o 00:24:15.765 CXX test/cpp_headers/string.o 00:24:15.765 CXX test/cpp_headers/thread.o 00:24:15.765 CXX test/cpp_headers/trace.o 00:24:15.765 CXX test/cpp_headers/trace_parser.o 00:24:15.765 CXX test/cpp_headers/tree.o 00:24:15.765 CXX test/cpp_headers/ublk.o 00:24:16.023 CXX test/cpp_headers/util.o 00:24:16.023 CXX test/cpp_headers/uuid.o 00:24:16.023 CXX test/cpp_headers/version.o 00:24:16.023 CXX test/cpp_headers/vfio_user_pci.o 00:24:16.023 CXX test/cpp_headers/vfio_user_spec.o 00:24:16.023 CXX test/cpp_headers/vhost.o 00:24:16.023 CXX test/cpp_headers/vmd.o 00:24:16.023 CXX test/cpp_headers/xor.o 00:24:16.023 CXX test/cpp_headers/zipf.o 00:24:21.294 00:24:21.294 real 1m1.600s 00:24:21.294 user 6m14.970s 00:24:21.294 sys 1m27.390s 00:24:21.294 08:20:54 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:24:21.294 08:20:54 -- common/autotest_common.sh@10 -- $ set +x 00:24:21.294 ************************************ 00:24:21.294 END TEST make 00:24:21.294 ************************************ 00:24:21.553 08:20:54 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:21.553 08:20:54 -- nvmf/common.sh@7 -- # uname -s 00:24:21.553 08:20:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.553 08:20:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.553 08:20:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.553 08:20:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.553 08:20:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.553 08:20:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.553 08:20:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.553 08:20:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.553 08:20:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.553 08:20:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.553 08:20:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:24:21.553 08:20:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:24:21.553 08:20:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.553 08:20:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.553 08:20:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:21.553 08:20:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:21.554 08:20:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.554 08:20:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.554 08:20:54 -- 
scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.554 08:20:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.554 08:20:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.554 08:20:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.554 08:20:54 -- paths/export.sh@5 -- # export PATH 00:24:21.554 08:20:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.554 08:20:54 -- nvmf/common.sh@46 -- # : 0 00:24:21.554 08:20:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:21.554 08:20:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:21.554 08:20:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:21.554 08:20:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.554 08:20:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.554 08:20:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:21.554 08:20:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:21.554 08:20:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:21.554 08:20:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:24:21.554 08:20:54 -- spdk/autotest.sh@32 -- # uname -s 00:24:21.554 08:20:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:24:21.554 08:20:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:24:21.554 08:20:54 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:24:21.554 08:20:54 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:24:21.554 08:20:54 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:24:21.554 08:20:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:24:21.554 08:20:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:24:21.554 08:20:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:24:21.554 08:20:54 -- spdk/autotest.sh@48 -- # udevadm_pid=49800 00:24:21.554 08:20:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:24:21.554 08:20:54 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:24:21.554 08:20:54 -- spdk/autotest.sh@54 -- # echo 49815 00:24:21.554 08:20:54 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:24:21.554 08:20:54 -- spdk/autotest.sh@56 -- # echo 49818 00:24:21.554 08:20:54 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d 
/home/vagrant/spdk_repo/spdk/../output/power 00:24:21.554 08:20:54 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:24:21.554 08:20:54 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:24:21.554 08:20:54 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:24:21.554 08:20:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:21.554 08:20:54 -- common/autotest_common.sh@10 -- # set +x 00:24:21.554 08:20:54 -- spdk/autotest.sh@70 -- # create_test_list 00:24:21.554 08:20:54 -- common/autotest_common.sh@736 -- # xtrace_disable 00:24:21.554 08:20:54 -- common/autotest_common.sh@10 -- # set +x 00:24:21.814 08:20:54 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:24:21.814 08:20:54 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:24:21.814 08:20:54 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:24:21.814 08:20:54 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:24:21.814 08:20:54 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:24:21.814 08:20:54 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:24:21.814 08:20:54 -- common/autotest_common.sh@1440 -- # uname 00:24:21.814 08:20:54 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:24:21.814 08:20:54 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:24:21.814 08:20:54 -- common/autotest_common.sh@1460 -- # uname 00:24:21.814 08:20:54 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:24:21.814 08:20:54 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:24:21.814 08:20:54 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:24:21.814 08:20:54 -- spdk/autotest.sh@83 -- # hash lcov 00:24:21.814 08:20:54 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:24:21.814 08:20:54 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:24:21.814 --rc lcov_branch_coverage=1 00:24:21.814 --rc lcov_function_coverage=1 00:24:21.814 --rc genhtml_branch_coverage=1 00:24:21.814 --rc genhtml_function_coverage=1 00:24:21.814 --rc genhtml_legend=1 00:24:21.814 --rc geninfo_all_blocks=1 00:24:21.814 ' 00:24:21.814 08:20:54 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:24:21.814 --rc lcov_branch_coverage=1 00:24:21.814 --rc lcov_function_coverage=1 00:24:21.814 --rc genhtml_branch_coverage=1 00:24:21.814 --rc genhtml_function_coverage=1 00:24:21.814 --rc genhtml_legend=1 00:24:21.814 --rc geninfo_all_blocks=1 00:24:21.814 ' 00:24:21.814 08:20:54 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:24:21.814 --rc lcov_branch_coverage=1 00:24:21.814 --rc lcov_function_coverage=1 00:24:21.814 --rc genhtml_branch_coverage=1 00:24:21.814 --rc genhtml_function_coverage=1 00:24:21.814 --rc genhtml_legend=1 00:24:21.814 --rc geninfo_all_blocks=1 00:24:21.814 --no-external' 00:24:21.814 08:20:54 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:24:21.814 --rc lcov_branch_coverage=1 00:24:21.814 --rc lcov_function_coverage=1 00:24:21.814 --rc genhtml_branch_coverage=1 00:24:21.814 --rc genhtml_function_coverage=1 00:24:21.814 --rc genhtml_legend=1 00:24:21.814 --rc geninfo_all_blocks=1 00:24:21.814 --no-external' 00:24:21.814 08:20:54 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:24:21.814 lcov: LCOV version 1.14 00:24:21.814 08:20:54 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:24:29.950 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:24:29.950 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:24:29.950 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:24:29.950 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:24:29.950 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:24:29.950 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:24:48.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:24:48.053 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:24:48.054 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:24:48.054 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 
00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:24:48.054 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:24:48.054 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:24:48.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:24:50.588 08:21:23 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:24:50.588 08:21:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:50.588 08:21:23 -- common/autotest_common.sh@10 -- # set +x 00:24:50.588 08:21:23 -- spdk/autotest.sh@102 -- # rm -f 00:24:50.588 08:21:23 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:51.153 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:51.153 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:24:51.153 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:24:51.153 08:21:24 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:24:51.153 08:21:24 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:24:51.153 08:21:24 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:24:51.153 08:21:24 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:24:51.153 08:21:24 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:24:51.153 08:21:24 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:24:51.153 08:21:24 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:24:51.153 08:21:24 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:51.153 08:21:24 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:24:51.153 08:21:24 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:24:51.153 08:21:24 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:24:51.153 08:21:24 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:24:51.153 08:21:24 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:51.153 08:21:24 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:24:51.153 08:21:24 -- 
common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:24:51.153 08:21:24 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:24:51.153 08:21:24 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:24:51.153 08:21:24 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:24:51.153 08:21:24 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:24:51.153 08:21:24 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:24:51.153 08:21:24 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:24:51.153 08:21:24 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:24:51.153 08:21:24 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:24:51.153 08:21:24 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:24:51.153 08:21:24 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:24:51.153 08:21:24 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:24:51.153 08:21:24 -- spdk/autotest.sh@121 -- # grep -v p 00:24:51.153 08:21:24 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:24:51.153 08:21:24 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:24:51.153 08:21:24 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:24:51.153 08:21:24 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:24:51.153 08:21:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:24:51.410 No valid GPT data, bailing 00:24:51.410 08:21:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:51.410 08:21:24 -- scripts/common.sh@393 -- # pt= 00:24:51.410 08:21:24 -- scripts/common.sh@394 -- # return 1 00:24:51.410 08:21:24 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:24:51.410 1+0 records in 00:24:51.410 1+0 records out 00:24:51.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00653593 s, 160 MB/s 00:24:51.410 08:21:24 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:24:51.410 08:21:24 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:24:51.410 08:21:24 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:24:51.410 08:21:24 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:24:51.410 08:21:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:24:51.410 No valid GPT data, bailing 00:24:51.410 08:21:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:51.410 08:21:24 -- scripts/common.sh@393 -- # pt= 00:24:51.410 08:21:24 -- scripts/common.sh@394 -- # return 1 00:24:51.410 08:21:24 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:24:51.410 1+0 records in 00:24:51.410 1+0 records out 00:24:51.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00463605 s, 226 MB/s 00:24:51.410 08:21:24 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:24:51.410 08:21:24 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:24:51.410 08:21:24 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n2 00:24:51.410 08:21:24 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:24:51.410 08:21:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:24:51.410 No valid GPT data, bailing 00:24:51.411 08:21:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:24:51.411 08:21:24 -- scripts/common.sh@393 -- # pt= 00:24:51.411 08:21:24 -- 
scripts/common.sh@394 -- # return 1 00:24:51.411 08:21:24 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:24:51.411 1+0 records in 00:24:51.411 1+0 records out 00:24:51.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00647875 s, 162 MB/s 00:24:51.411 08:21:24 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:24:51.411 08:21:24 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:24:51.411 08:21:24 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n3 00:24:51.411 08:21:24 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:24:51.411 08:21:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:24:51.411 No valid GPT data, bailing 00:24:51.411 08:21:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:24:51.670 08:21:24 -- scripts/common.sh@393 -- # pt= 00:24:51.670 08:21:24 -- scripts/common.sh@394 -- # return 1 00:24:51.670 08:21:24 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:24:51.670 1+0 records in 00:24:51.670 1+0 records out 00:24:51.670 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00623393 s, 168 MB/s 00:24:51.670 08:21:24 -- spdk/autotest.sh@129 -- # sync 00:24:51.670 08:21:24 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:24:51.670 08:21:24 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:24:51.670 08:21:24 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:24:54.206 08:21:27 -- spdk/autotest.sh@135 -- # uname -s 00:24:54.206 08:21:27 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:24:54.206 08:21:27 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:24:54.206 08:21:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:54.206 08:21:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:54.206 08:21:27 -- common/autotest_common.sh@10 -- # set +x 00:24:54.206 ************************************ 00:24:54.206 START TEST setup.sh 00:24:54.206 ************************************ 00:24:54.206 08:21:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:24:54.206 * Looking for test storage... 00:24:54.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:24:54.206 08:21:27 -- setup/test-setup.sh@10 -- # uname -s 00:24:54.206 08:21:27 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:24:54.206 08:21:27 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:24:54.206 08:21:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:54.206 08:21:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:54.206 08:21:27 -- common/autotest_common.sh@10 -- # set +x 00:24:54.206 ************************************ 00:24:54.206 START TEST acl 00:24:54.206 ************************************ 00:24:54.206 08:21:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:24:54.206 * Looking for test storage... 
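The trace above records autotest's pre-cleanup disk scrub: each /dev/nvme*n* namespace is probed with scripts/spdk-gpt.py and blkid -s PTTYPE, and only when neither probe turns up a partition table ("No valid GPT data, bailing", pt empty, return 1) does dd zero the first MiB. A minimal sketch of that loop, assuming spdk-gpt.py exits non-zero when no GPT is present (paths and commands are the ones in the log; the exit-code convention is inferred from the trace, not from the script itself):

#!/usr/bin/env bash
# Condensed form of the wipe pass seen in the xtrace; illustrative, not the
# verbatim spdk/autotest.sh logic.
spdk_gpt=/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py
for dev in $(ls /dev/nvme*n* | grep -v p || true); do
    # A device counts as "in use" if either probe reports a partition table.
    if "$spdk_gpt" "$dev" || [[ -n $(blkid -s PTTYPE -o value "$dev") ]]; then
        continue
    fi
    # No valid GPT data: scrub the first MiB, matching the dd lines above.
    dd if=/dev/zero of="$dev" bs=1M count=1
done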
00:24:54.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:24:54.206 08:21:27 -- setup/acl.sh@10 -- # get_zoned_devs 00:24:54.206 08:21:27 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:24:54.206 08:21:27 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:24:54.206 08:21:27 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:24:54.206 08:21:27 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:24:54.206 08:21:27 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:24:54.206 08:21:27 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:24:54.206 08:21:27 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:54.206 08:21:27 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:24:54.206 08:21:27 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:24:54.206 08:21:27 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:24:54.206 08:21:27 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:24:54.206 08:21:27 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:54.206 08:21:27 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:24:54.206 08:21:27 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:24:54.206 08:21:27 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:24:54.206 08:21:27 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:24:54.206 08:21:27 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:24:54.207 08:21:27 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:24:54.207 08:21:27 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:24:54.207 08:21:27 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:24:54.207 08:21:27 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:24:54.207 08:21:27 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:24:54.207 08:21:27 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:24:54.207 08:21:27 -- setup/acl.sh@12 -- # devs=() 00:24:54.207 08:21:27 -- setup/acl.sh@12 -- # declare -a devs 00:24:54.207 08:21:27 -- setup/acl.sh@13 -- # drivers=() 00:24:54.207 08:21:27 -- setup/acl.sh@13 -- # declare -A drivers 00:24:54.207 08:21:27 -- setup/acl.sh@51 -- # setup reset 00:24:54.207 08:21:27 -- setup/common.sh@9 -- # [[ reset == output ]] 00:24:54.207 08:21:27 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:55.143 08:21:28 -- setup/acl.sh@52 -- # collect_setup_devs 00:24:55.143 08:21:28 -- setup/acl.sh@16 -- # local dev driver 00:24:55.143 08:21:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:24:55.143 08:21:28 -- setup/acl.sh@15 -- # setup output status 00:24:55.143 08:21:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:24:55.143 08:21:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:24:55.402 Hugepages 00:24:55.402 node hugesize free / total 00:24:55.402 08:21:28 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:24:55.402 08:21:28 -- setup/acl.sh@19 -- # continue 00:24:55.402 08:21:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:24:55.402 00:24:55.402 Type BDF Vendor Device NUMA Driver Device Block devices 00:24:55.402 08:21:28 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:24:55.402 08:21:28 -- setup/acl.sh@19 -- # continue 00:24:55.402 08:21:28 -- setup/acl.sh@18 -- # read -r 
_ dev _ _ _ driver _ 00:24:55.661 08:21:28 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:24:55.661 08:21:28 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:24:55.661 08:21:28 -- setup/acl.sh@20 -- # continue 00:24:55.661 08:21:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:24:55.661 08:21:28 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:24:55.661 08:21:28 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:24:55.661 08:21:28 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:24:55.661 08:21:28 -- setup/acl.sh@22 -- # devs+=("$dev") 00:24:55.661 08:21:28 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:24:55.661 08:21:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:24:55.921 08:21:29 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:24:55.921 08:21:29 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:24:55.921 08:21:29 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:24:55.921 08:21:29 -- setup/acl.sh@22 -- # devs+=("$dev") 00:24:55.921 08:21:29 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:24:55.921 08:21:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:24:55.921 08:21:29 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:24:55.921 08:21:29 -- setup/acl.sh@54 -- # run_test denied denied 00:24:55.921 08:21:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:55.921 08:21:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:55.921 08:21:29 -- common/autotest_common.sh@10 -- # set +x 00:24:55.921 ************************************ 00:24:55.921 START TEST denied 00:24:55.921 ************************************ 00:24:55.921 08:21:29 -- common/autotest_common.sh@1104 -- # denied 00:24:55.921 08:21:29 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:24:55.921 08:21:29 -- setup/acl.sh@38 -- # setup output config 00:24:55.921 08:21:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:24:55.921 08:21:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:24:55.921 08:21:29 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:24:56.859 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:24:56.859 08:21:30 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:24:56.859 08:21:30 -- setup/acl.sh@28 -- # local dev driver 00:24:56.859 08:21:30 -- setup/acl.sh@30 -- # for dev in "$@" 00:24:56.859 08:21:30 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:24:56.859 08:21:30 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:24:56.859 08:21:30 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:24:56.859 08:21:30 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:24:56.859 08:21:30 -- setup/acl.sh@41 -- # setup reset 00:24:56.859 08:21:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:24:56.859 08:21:30 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:57.796 ************************************ 00:24:57.796 END TEST denied 00:24:57.796 ************************************ 00:24:57.796 00:24:57.796 real 0m1.735s 00:24:57.796 user 0m0.610s 00:24:57.796 sys 0m1.099s 00:24:57.796 08:21:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:57.796 08:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:57.796 08:21:30 -- setup/acl.sh@55 -- # run_test allowed allowed 00:24:57.796 08:21:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:57.796 08:21:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:57.796 
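The denied test just traced works by driver inspection: PCI_BLOCKED names a controller, setup.sh config is re-run, its output is grepped for the "Skipping denied controller" line, and the sysfs driver symlink is resolved to confirm the device stayed bound to nvme. A short sketch of that verify step, assuming the sysfs layout shown in the log (verify_driver here is a hypothetical condensation of acl.sh's verify, not the script itself):

# Illustrative check mirroring setup/acl.sh's verify.
verify_driver() {
    local bdf=$1 expected=$2 driver
    [[ -e /sys/bus/pci/devices/$bdf ]] || return 1
    # Resolve which kernel driver currently owns the device.
    driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
    [[ ${driver##*/} == "$expected" ]]
}
verify_driver 0000:00:06.0 nvme   # succeeds in the run above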
08:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:57.796 ************************************ 00:24:57.796 START TEST allowed 00:24:57.796 ************************************ 00:24:57.796 08:21:30 -- common/autotest_common.sh@1104 -- # allowed 00:24:57.796 08:21:30 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:24:57.796 08:21:30 -- setup/acl.sh@45 -- # setup output config 00:24:57.796 08:21:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:24:57.796 08:21:30 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:24:57.796 08:21:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:24:58.736 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:24:58.736 08:21:31 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:24:58.736 08:21:31 -- setup/acl.sh@28 -- # local dev driver 00:24:58.736 08:21:31 -- setup/acl.sh@30 -- # for dev in "$@" 00:24:58.736 08:21:31 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:24:58.736 08:21:31 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:24:58.736 08:21:31 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:24:58.736 08:21:31 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:24:58.736 08:21:31 -- setup/acl.sh@48 -- # setup reset 00:24:58.736 08:21:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:24:58.736 08:21:31 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:59.688 00:24:59.688 real 0m1.945s 00:24:59.688 user 0m0.728s 00:24:59.688 sys 0m1.241s 00:24:59.688 08:21:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:59.688 08:21:32 -- common/autotest_common.sh@10 -- # set +x 00:24:59.688 ************************************ 00:24:59.688 END TEST allowed 00:24:59.688 ************************************ 00:24:59.688 00:24:59.688 real 0m5.429s 00:24:59.688 user 0m2.027s 00:24:59.688 sys 0m3.445s 00:24:59.688 08:21:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:59.688 08:21:32 -- common/autotest_common.sh@10 -- # set +x 00:24:59.688 ************************************ 00:24:59.688 END TEST acl 00:24:59.688 ************************************ 00:24:59.688 08:21:32 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:24:59.688 08:21:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:59.688 08:21:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:59.688 08:21:32 -- common/autotest_common.sh@10 -- # set +x 00:24:59.688 ************************************ 00:24:59.688 START TEST hugepages 00:24:59.688 ************************************ 00:24:59.689 08:21:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:24:59.689 * Looking for test storage... 
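Both the acl run above and the earlier pre-cleanup start from get_zoned_devs, whose loop the xtrace spells out: every /sys/block/nvme* entry is tested for a queue/zoned attribute, and namespaces reporting anything other than "none" are collected so later steps can avoid them (on this runner all four read "none", so every [[ none != none ]] is false). A condensed sketch under that reading; what the real helper stores per device is not visible in the trace, so the value below is illustrative:

# Reconstructed from the xtrace of get_zoned_devs; illustrative only.
declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    # queue/zoned reads 'none' for conventional (non-zoned) namespaces.
    if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
        zoned_devs[$dev]=1
    fi
done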
00:24:59.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:24:59.689 08:21:32 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:24:59.689 08:21:32 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:24:59.689 08:21:32 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:24:59.689 08:21:32 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:24:59.689 08:21:32 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:24:59.689 08:21:32 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:24:59.689 08:21:32 -- setup/common.sh@17 -- # local get=Hugepagesize 00:24:59.689 08:21:32 -- setup/common.sh@18 -- # local node= 00:24:59.689 08:21:32 -- setup/common.sh@19 -- # local var val 00:24:59.689 08:21:32 -- setup/common.sh@20 -- # local mem_f mem 00:24:59.689 08:21:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:24:59.689 08:21:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:24:59.689 08:21:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:24:59.689 08:21:32 -- setup/common.sh@28 -- # mapfile -t mem 00:24:59.689 08:21:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 5489504 kB' 'MemAvailable: 7419180 kB' 'Buffers: 2436 kB' 'Cached: 2139572 kB' 'SwapCached: 0 kB' 'Active: 873984 kB' 'Inactive: 1372316 kB' 'Active(anon): 114780 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 106496 kB' 'Mapped: 48836 kB' 'Shmem: 10488 kB' 'KReclaimable: 70636 kB' 'Slab: 147580 kB' 'SReclaimable: 70636 kB' 'SUnreclaim: 76944 kB' 'KernelStack: 6396 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412432 kB' 'Committed_AS: 332576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- 
setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:24:59.689 08:21:32 -- setup/common.sh@32 -- # continue 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # IFS=': ' 00:24:59.689 08:21:32 -- setup/common.sh@31 -- # read -r var val _ 00:24:59.689 08:21:32 -- 
setup/common.sh@32 -- # [xtrace condensed: SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd and HugePages_Surp each fail the Hugepagesize match and continue]
00:24:59.690 08:21:32 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:24:59.690 08:21:32 -- setup/common.sh@33 -- # echo 2048
00:24:59.690 08:21:32 -- setup/common.sh@33 -- # return 0
00:24:59.690 08:21:32 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:24:59.690 08:21:32 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:24:59.690 08:21:32 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:24:59.690 08:21:32 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:24:59.690 08:21:32 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:24:59.690 08:21:32 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:24:59.690 08:21:32 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
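The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one line at a time with IFS=': ' until the requested key matches, then echoing its value (2048 for Hugepagesize). A minimal sketch of that lookup, reconstructed from the xtrace alone rather than copied from the shipped script:

    #!/usr/bin/env bash
    # Sketch of the lookup traced above: scan /proc/meminfo line by line,
    # skip (continue) every key that is not the one requested, and echo
    # the value of the first match. The function body is an approximation.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo Hugepagesize   # prints 2048 on this runner

From that value hugepages.sh derives default_hugepages=2048 (kB) plus the sysfs and procfs knobs it will write, and clears the HUGE_* environment overrides so the test starts from defaults.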
00:24:59.690 08:21:32 -- setup/hugepages.sh@207 -- # get_nodes
00:24:59.690 08:21:32 -- setup/hugepages.sh@27 -- # local node
00:24:59.690 08:21:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:24:59.690 08:21:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:24:59.690 08:21:32 -- setup/hugepages.sh@32 -- # no_nodes=1
00:24:59.690 08:21:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:24:59.690 08:21:32 -- setup/hugepages.sh@208 -- # clear_hp
00:24:59.690 08:21:32 -- setup/hugepages.sh@37 -- # local node hp
00:24:59.690 08:21:32 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:24:59.690 08:21:32 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:24:59.690 08:21:32 -- setup/hugepages.sh@41 -- # echo 0
00:24:59.690 08:21:32 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:24:59.690 08:21:32 -- setup/hugepages.sh@41 -- # echo 0
00:24:59.690 08:21:33 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:24:59.690 08:21:33 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:24:59.690 08:21:33 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:24:59.690 08:21:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:24:59.690 08:21:33 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:24:59.690 08:21:33 -- common/autotest_common.sh@10 -- # set +x
00:24:59.949 ************************************
00:24:59.949 START TEST default_setup
00:24:59.949 ************************************
00:24:59.949 08:21:33 -- common/autotest_common.sh@1104 -- # default_setup
00:24:59.949 08:21:33 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:24:59.949 08:21:33 -- setup/hugepages.sh@49 -- # local size=2097152
00:24:59.949 08:21:33 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:24:59.949 08:21:33 -- setup/hugepages.sh@51 -- # shift
00:24:59.949 08:21:33 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:24:59.949 08:21:33 -- setup/hugepages.sh@52 -- # local node_ids
00:24:59.949 08:21:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:24:59.949 08:21:33 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:24:59.949 08:21:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:24:59.949 08:21:33 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:24:59.949 08:21:33 -- setup/hugepages.sh@62 -- # local user_nodes
00:24:59.949 08:21:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:24:59.949 08:21:33 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:24:59.949 08:21:33 -- setup/hugepages.sh@67 -- # nodes_test=()
00:24:59.949 08:21:33 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:24:59.949 08:21:33 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:24:59.949 08:21:33 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:24:59.949 08:21:33 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:24:59.949 08:21:33 -- setup/hugepages.sh@73 -- # return 0
00:24:59.949 08:21:33 -- setup/hugepages.sh@137 -- # setup output
00:24:59.949 08:21:33 -- setup/common.sh@9 -- # [[ output == output ]]
00:24:59.949 08:21:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:25:00.518 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:25:00.779 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:25:00.779 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
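Two steps traced above are worth restating: clear_hp zeroes every per-node hugepage pool before the test, and get_test_nr_hugepages converts the requested pool size into a page count (2097152 kB / 2048 kB per page = 1024 pages) before setup.sh rebinds the NVMe devices seen just above. A sketch of both, inferred from the xtrace rather than taken from the canonical setup/hugepages.sh; the kB units are an assumption consistent with the arithmetic in the trace:

    #!/usr/bin/env bash
    default_hugepages=2048   # kB, from get_meminfo Hugepagesize

    # Zero every hugepage pool under each NUMA node, as clear_hp does
    # above (this runner has a single node, node0, with two pool sizes).
    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes
    }

    # Requested size divided by the page size gives the page count:
    # 2097152 / 2048 = 1024, matching nr_hugepages=1024 in the trace.
    get_test_nr_hugepages() {
        local size=$1
        nr_hugepages=$(( size / default_hugepages ))
    }
    get_test_nr_hugepages 2097152

Writing 0 to nr_hugepages needs root, which the CI runner has; starting every pool at zero is what makes the later accounting checks meaningful.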
00:25:00.779 08:21:34 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:25:00.779 08:21:34 -- setup/hugepages.sh@89 -- # local node
00:25:00.779 08:21:34 -- setup/hugepages.sh@90 -- # local sorted_t
00:25:00.779 08:21:34 -- setup/hugepages.sh@91 -- # local sorted_s
00:25:00.779 08:21:34 -- setup/hugepages.sh@92 -- # local surp
00:25:00.779 08:21:34 -- setup/hugepages.sh@93 -- # local resv
00:25:00.779 08:21:34 -- setup/hugepages.sh@94 -- # local anon
00:25:00.779 08:21:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:25:00.779 08:21:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:25:00.779 08:21:34 -- setup/common.sh@17 -- # local get=AnonHugePages
00:25:00.779 08:21:34 -- setup/common.sh@18 -- # local node=
00:25:00.779 08:21:34 -- setup/common.sh@19 -- # local var val
00:25:00.779 08:21:34 -- setup/common.sh@20 -- # local mem_f mem
00:25:00.779 08:21:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:25:00.779 08:21:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:25:00.779 08:21:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:25:00.779 08:21:34 -- setup/common.sh@28 -- # mapfile -t mem
00:25:00.779 08:21:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:25:00.779 08:21:34 -- setup/common.sh@31 -- # IFS=': '
00:25:00.779 08:21:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7575880 kB' 'MemAvailable: 9505392 kB' 'Buffers: 2436 kB' 'Cached: 2139564 kB' 'SwapCached: 0 kB' 'Active: 889444 kB' 'Inactive: 1372316 kB' 'Active(anon): 130240 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 121348 kB' 'Mapped: 48896 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147196 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 76888 kB' 'KernelStack: 6400 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 356972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:25:00.779 08:21:34 [xtrace condensed: every key from MemTotal through HardwareCorrupted fails the AnonHugePages match and continues]
00:25:00.780 08:21:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:25:00.780 08:21:34 -- setup/common.sh@33 -- # echo 0
00:25:00.780 08:21:34 -- setup/common.sh@33 -- # return 0
00:25:00.780 08:21:34 -- setup/hugepages.sh@97 -- # anon=0
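When get_meminfo is given a node, it reads /sys/devices/system/node/node$N/meminfo instead of /proc/meminfo; those per-node lines carry a 'Node N ' prefix, which the mem=("${mem[@]#Node +([0-9]) }") step in the trace strips so the same key scan works on both files. A sketch of that normalization; the node0 path and the extglob requirement are inferred from the pattern, not quoted from the script:

    #!/usr/bin/env bash
    # Per-node meminfo lines look like 'Node 0 AnonHugePages:  0 kB'.
    # mapfile loads them, then the extglob pattern drops the 'Node 0 '
    # prefix so the keys line up with /proc/meminfo's format.
    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}" | awk -F': +' '$1 == "AnonHugePages" {print $2}'

In this run node= is empty, so the [[ -n '' ]] test fails and mem_f stays /proc/meminfo; AnonHugePages comes back 0, which the test needs, since transparent hugepages in use would skew the hugetlb accounting that follows.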
00:25:00.780 08:21:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:25:00.780 08:21:34 [xtrace condensed: common.sh@17-29 get_meminfo prologue as above, now with get=HugePages_Surp]
00:25:00.781 08:21:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7575880 kB' 'MemAvailable: 9505392 kB' 'Buffers: 2436 kB' 'Cached: 2139564 kB' 'SwapCached: 0 kB' 'Active: 889036 kB' 'Inactive: 1372316 kB' 'Active(anon): 129832 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 120956 kB' 'Mapped: 48776 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147196 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 76888 kB' 'KernelStack: 6384 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 356972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:25:00.781 08:21:34 [xtrace condensed: every key from MemTotal through HugePages_Rsvd fails the HugePages_Surp match and continues]
00:25:00.782 08:21:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:25:00.782 08:21:34 -- setup/common.sh@33 -- # echo 0
00:25:00.782 08:21:34 -- setup/common.sh@33 -- # return 0
00:25:00.782 08:21:34 -- setup/hugepages.sh@99 -- # surp=0
00:25:00.782 08:21:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:25:00.782 08:21:34 [xtrace condensed: common.sh@17-29 get_meminfo prologue as above, now with get=HugePages_Rsvd]
00:25:00.782 08:21:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7575880 kB' 'MemAvailable: 9505392 kB' 'Buffers: 2436 kB' 'Cached: 2139564 kB' 'SwapCached: 0 kB' 'Active: 889192 kB' 'Inactive: 1372316 kB' 'Active(anon): 129988 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 121104 kB' 'Mapped: 48776 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147196 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 76888 kB' 'KernelStack: 6368 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 356972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:25:00.782 08:21:34 [xtrace condensed: every key from MemTotal through HugePages_Free fails the HugePages_Rsvd match and continues]
00:25:01.045 08:21:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:25:01.045 08:21:34 -- setup/common.sh@33 -- # echo 0
00:25:01.045 08:21:34 -- setup/common.sh@33 -- # return 0
00:25:01.045 08:21:34 -- setup/hugepages.sh@100 -- # resv=0
00:25:01.045 nr_hugepages=1024
00:25:01.045 08:21:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:25:01.045 resv_hugepages=0
00:25:01.045 08:21:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:25:01.045 surplus_hugepages=0
00:25:01.045 08:21:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:25:01.045 anon_hugepages=0
00:25:01.045 08:21:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:25:01.045 08:21:34 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:25:01.045 08:21:34 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
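With anon, surp, and resv all measured at 0, verify_nr_hugepages now asserts that the kernel's view matches what was requested: the configured 1024 pages must equal nr_hugepages + surplus + reserved, and HugePages_Total (fetched next) is held to the same arithmetic. A standalone restatement of that accounting; the helper is inlined via awk here instead of the script's get_meminfo, and the field roles are inferred from the trace:

    #!/usr/bin/env bash
    # Re-run the accounting check verify_nr_hugepages performs above.
    meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }
    nr_hugepages=1024                       # requested by default_setup
    anon=$(meminfo AnonHugePages)           # THP usage, expected 0
    surp=$(meminfo HugePages_Surp)          # surplus pages, expected 0
    resv=$(meminfo HugePages_Rsvd)          # reserved pages, expected 0
    total=$(meminfo HugePages_Total)
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'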
00:25:01.045 08:21:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:25:01.045 08:21:34 [xtrace condensed: common.sh@17-29 get_meminfo prologue as above, now with get=HugePages_Total]
00:25:01.045 08:21:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7575880 kB' 'MemAvailable: 9505392 kB' 'Buffers: 2436 kB' 'Cached: 2139564 kB' 'SwapCached: 0 kB' 'Active: 888980 kB' 'Inactive: 1372316 kB' 'Active(anon): 129776 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 120896 kB' 'Mapped: 48776 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147192 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 76884 kB' 'KernelStack: 6368 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 356972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:25:01.045 08:21:34 [xtrace condensed: the scan against HugePages_Total is under way, with MemTotal through KReclaimable compared and skipped so far; the raw trace resumes below]
00:25:01.046 
08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:01.046 08:21:34 -- setup/common.sh@33 -- # echo 1024 
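Annotation: the scan that closes just above (ending in echo 1024) is the core lookup pattern in this trace: slurp a meminfo-style file, strip any per-node prefix, split each field on ': ', and print the value once the requested key matches. Below is a minimal standalone sketch of that lookup, assuming only the standard /proc and sysfs meminfo layout; the function body is illustrative, not a verbatim copy of setup/common.sh.

    #!/usr/bin/env bash
    # Sketch of the lookup finishing above: read a meminfo-style file,
    # drop any "Node N " prefix, and print the value for one key.
    shopt -s extglob                       # enables the +([0-9]) pattern below
    get_meminfo() {
        local key=$1 node=${2:-}
        local mem_f=/proc/meminfo          # system-wide counters by default
        # Per-node counters (e.g. HugePages_Surp) live under sysfs instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#Node +([0-9]) }    # sysfs lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$key" ]]; then
                echo "$val"                # e.g. 1024 for HugePages_Total above
                return 0
            fi
        done <"$mem_f"
        return 1
    }
    get_meminfo HugePages_Total            # whole machine
    get_meminfo HugePages_Surp 0           # node 0 only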
00:25:01.046 08:21:34 -- setup/common.sh@33 -- # return 0 00:25:01.046 08:21:34 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:25:01.046 08:21:34 -- setup/hugepages.sh@112 -- # get_nodes 00:25:01.046 08:21:34 -- setup/hugepages.sh@27 -- # local node 00:25:01.046 08:21:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:25:01.046 08:21:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:25:01.046 08:21:34 -- setup/hugepages.sh@32 -- # no_nodes=1 00:25:01.046 08:21:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:25:01.046 08:21:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:25:01.046 08:21:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:25:01.046 08:21:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:25:01.046 08:21:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:25:01.046 08:21:34 -- setup/common.sh@18 -- # local node=0 00:25:01.046 08:21:34 -- setup/common.sh@19 -- # local var val 00:25:01.046 08:21:34 -- setup/common.sh@20 -- # local mem_f mem 00:25:01.046 08:21:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:25:01.046 08:21:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:25:01.046 08:21:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:25:01.046 08:21:34 -- setup/common.sh@28 -- # mapfile -t mem 00:25:01.046 08:21:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:25:01.046 08:21:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7575880 kB' 'MemUsed: 4666080 kB' 'SwapCached: 0 kB' 'Active: 888980 kB' 'Inactive: 1372316 kB' 'Active(anon): 129776 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 2142000 kB' 'Mapped: 48776 kB' 'AnonPages: 120896 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70308 kB' 'Slab: 147192 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 76884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # 
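Annotation: before the per-node surplus check running above, get_nodes enumerated the NUMA topology: each /sys/devices/system/node/nodeN directory becomes one entry in the expectation map, and no_nodes is its size. A standalone sketch of that discovery step, assuming the standard sysfs layout (the 1024 seed mirrors the expectation in this trace; the associative array is an illustrative choice, not the script's own):

    shopt -s extglob nullglob              # +([0-9]) pattern; empty glob expands to nothing
    declare -A nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=1024     # key "0" on this single-node VM
    done
    echo "no_nodes=${#nodes_sys[@]}"       # the trace above sets no_nodes=1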
IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.046 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.046 08:21:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 
08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 
08:21:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.047 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.047 08:21:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.047 08:21:34 -- setup/common.sh@33 -- # echo 0 00:25:01.047 08:21:34 -- setup/common.sh@33 -- # return 0 00:25:01.047 08:21:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:25:01.047 08:21:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:25:01.047 08:21:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:25:01.047 08:21:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:25:01.047 node0=1024 expecting 1024 00:25:01.047 08:21:34 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:25:01.047 08:21:34 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:25:01.047 00:25:01.047 real 0m1.159s 00:25:01.047 user 0m0.500s 00:25:01.047 sys 0m0.629s 00:25:01.047 08:21:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:01.047 08:21:34 -- common/autotest_common.sh@10 -- # set +x 00:25:01.047 ************************************ 00:25:01.047 END TEST default_setup 00:25:01.047 ************************************ 00:25:01.047 08:21:34 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:25:01.047 08:21:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:01.047 08:21:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:01.047 08:21:34 -- common/autotest_common.sh@10 -- # set +x 00:25:01.047 ************************************ 00:25:01.047 START TEST per_node_1G_alloc 00:25:01.047 ************************************ 
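Annotation: the sizing the per_node_1G_alloc body below settles on can be checked by hand: the test requests 1048576 kB (1 GiB) on node 0, and every meminfo dump above reports Hugepagesize: 2048 kB, so the target is 512 pages. A two-line sketch of that arithmetic, with both values taken from the trace:

    size_kb=1048576                        # 1 GiB requested per node, in kB
    hugepagesize_kb=2048                   # Hugepagesize field in the dumps above
    echo $(( size_kb / hugepagesize_kb ))  # prints 512, matching nr_hugepages=512 below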
00:25:01.047 08:21:34 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:25:01.047 08:21:34 -- setup/hugepages.sh@143 -- # local IFS=, 00:25:01.047 08:21:34 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:25:01.047 08:21:34 -- setup/hugepages.sh@49 -- # local size=1048576 00:25:01.047 08:21:34 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:25:01.047 08:21:34 -- setup/hugepages.sh@51 -- # shift 00:25:01.047 08:21:34 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:25:01.047 08:21:34 -- setup/hugepages.sh@52 -- # local node_ids 00:25:01.047 08:21:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:25:01.047 08:21:34 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:25:01.047 08:21:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:25:01.047 08:21:34 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:25:01.047 08:21:34 -- setup/hugepages.sh@62 -- # local user_nodes 00:25:01.047 08:21:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:25:01.047 08:21:34 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:25:01.047 08:21:34 -- setup/hugepages.sh@67 -- # nodes_test=() 00:25:01.047 08:21:34 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:25:01.047 08:21:34 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:25:01.047 08:21:34 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:25:01.047 08:21:34 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:25:01.047 08:21:34 -- setup/hugepages.sh@73 -- # return 0 00:25:01.047 08:21:34 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:25:01.047 08:21:34 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:25:01.047 08:21:34 -- setup/hugepages.sh@146 -- # setup output 00:25:01.047 08:21:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:25:01.048 08:21:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:01.619 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:01.619 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:01.619 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:01.619 08:21:34 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:25:01.619 08:21:34 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:25:01.619 08:21:34 -- setup/hugepages.sh@89 -- # local node 00:25:01.619 08:21:34 -- setup/hugepages.sh@90 -- # local sorted_t 00:25:01.619 08:21:34 -- setup/hugepages.sh@91 -- # local sorted_s 00:25:01.619 08:21:34 -- setup/hugepages.sh@92 -- # local surp 00:25:01.619 08:21:34 -- setup/hugepages.sh@93 -- # local resv 00:25:01.619 08:21:34 -- setup/hugepages.sh@94 -- # local anon 00:25:01.619 08:21:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:25:01.619 08:21:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:25:01.619 08:21:34 -- setup/common.sh@17 -- # local get=AnonHugePages 00:25:01.619 08:21:34 -- setup/common.sh@18 -- # local node= 00:25:01.619 08:21:34 -- setup/common.sh@19 -- # local var val 00:25:01.619 08:21:34 -- setup/common.sh@20 -- # local mem_f mem 00:25:01.619 08:21:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:25:01.619 08:21:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:25:01.619 08:21:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:25:01.619 08:21:34 -- setup/common.sh@28 -- # mapfile -t mem 00:25:01.619 08:21:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 
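Annotation: the verify pass starting above opens with a transparent-hugepage guard; the expanded test [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] is a check against /sys/kernel/mm/transparent_hugepage/enabled, where the kernel brackets the active policy. Anonymous hugepages are only worth sampling when THP is not pinned to never. A sketch of that guard, assuming the standard kernel sysfs path (the grep is illustrative):

    # Read the THP policy; the active choice is bracketed, e.g. "always [madvise] never".
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP may still create anonymous huge pages, so sample the counter.
        grep AnonHugePages /proc/meminfo
    fi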
00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8621960 kB' 'MemAvailable: 10551488 kB' 'Buffers: 2436 kB' 'Cached: 2139564 kB' 'SwapCached: 0 kB' 'Active: 889008 kB' 'Inactive: 1372332 kB' 'Active(anon): 129804 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372332 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 121172 kB' 'Mapped: 48900 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147252 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 76944 kB' 'KernelStack: 6400 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 356972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 
08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.619 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.619 08:21:34 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:01.620 08:21:34 -- setup/common.sh@33 -- # echo 0 00:25:01.620 08:21:34 -- setup/common.sh@33 -- # return 0 00:25:01.620 08:21:34 -- setup/hugepages.sh@97 -- # anon=0 00:25:01.620 08:21:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:25:01.620 08:21:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:25:01.620 08:21:34 -- setup/common.sh@18 -- # local node= 00:25:01.620 08:21:34 -- setup/common.sh@19 -- # local var val 00:25:01.620 08:21:34 -- setup/common.sh@20 -- # local mem_f mem 00:25:01.620 08:21:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:25:01.620 08:21:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:25:01.620 08:21:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:25:01.620 08:21:34 -- setup/common.sh@28 -- # mapfile -t mem 00:25:01.620 08:21:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8621960 kB' 'MemAvailable: 10551488 kB' 'Buffers: 2436 kB' 'Cached: 2139564 kB' 'SwapCached: 0 kB' 'Active: 889312 kB' 'Inactive: 1372332 kB' 'Active(anon): 130108 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372332 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 121568 kB' 'Mapped: 49816 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147252 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 76944 
kB' 'KernelStack: 6416 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 359536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.620 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.620 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 
08:21:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 
00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # continue 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # IFS=': ' 00:25:01.621 08:21:34 -- setup/common.sh@31 -- # read -r var val _ 00:25:01.621 08:21:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
[xtrace condensed: setup/common.sh@31-32 walks the remaining /proc/meminfo fields (VmallocUsed … HugePages_Rsvd) with IFS=': '; read -r var val _, issuing 'continue' for every non-matching field until HugePages_Surp matches]
00:25:01.622 08:21:34 -- setup/common.sh@33 -- # echo 0
00:25:01.622 08:21:34 -- setup/common.sh@33 -- # return 0
00:25:01.622 08:21:34 -- setup/hugepages.sh@99 -- # surp=0
00:25:01.622 08:21:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace condensed: setup/common.sh@17-29 sets get=HugePages_Rsvd, node='', mem_f=/proc/meminfo, then mapfile -t mem and strips any 'Node N ' prefix]
00:25:01.622 08:21:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8621960 kB' 'MemAvailable: 10551488 kB' 'Buffers: 2436 kB' 'Cached: 2139564 kB' 'SwapCached: 0 kB' 'Active: 888772 kB' 'Inactive: 1372332 kB' 'Active(anon): 129568 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372332 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 120976 kB' 'Mapped: 48776 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147244 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 76936 kB' 'KernelStack: 6368 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 356972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: the same field-by-field scan repeats (MemTotal … HugePages_Free all 'continue') until HugePages_Rsvd matches]
00:25:01.623 08:21:34 -- setup/common.sh@33 -- # echo 0
00:25:01.623 08:21:34 -- setup/common.sh@33 -- # return 0
00:25:01.623 08:21:34 -- setup/hugepages.sh@100 -- # resv=0
00:25:01.623 nr_hugepages=512
00:25:01.623 08:21:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
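For anyone reading along, the parsing pattern this trace keeps exercising is compact in source form. Below is a hedged reconstruction from the xtrace alone, not the verbatim setup/common.sh; get_meminfo_value is a name invented here for illustration:

    #!/usr/bin/env bash
    # Reconstruction of the lookup visible in the trace: slurp a meminfo file,
    # strip the "Node N " prefix that the sysfs copies carry, then scan for one key.
    shopt -s extglob                          # required by the +([0-9]) pattern
    get_meminfo_value() {                     # hypothetical name, ours
      local get=$1 mem_f=${2:-/proc/meminfo} var val _ line
      local -a mem
      mapfile -t mem < "$mem_f"               # setup/common.sh@28 equivalent
      mem=("${mem[@]#Node +([0-9]) }")        # setup/common.sh@29 equivalent
      for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # split "Key:  value kB"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
        # every non-matching field is one of the 'continue' lines seen above
      done
      return 1
    }
    # usage: surp=$(get_meminfo_value HugePages_Surp)   # 0 in this run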
00:25:01.623 resv_hugepages=0
00:25:01.623 08:21:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:25:01.623 surplus_hugepages=0
00:25:01.623 08:21:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:25:01.623 anon_hugepages=0
00:25:01.623 08:21:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:25:01.623 08:21:34 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:25:01.623 08:21:34 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:25:01.623 08:21:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace condensed: setup/common.sh@17-29 as before with mem_f=/proc/meminfo; the printf snapshot is near-identical to the one above (Active(anon) now 129752 kB, Slab 147240 kB; HugePages_Total: 512, HugePages_Free: 512); the field scan runs until HugePages_Total matches]
00:25:01.624 08:21:34 -- setup/common.sh@33 -- # echo 512
00:25:01.624 08:21:34 -- setup/common.sh@33 -- # return 0
00:25:01.624 08:21:34 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:25:01.624 08:21:34 -- setup/hugepages.sh@112 -- # get_nodes
00:25:01.624 08:21:34 -- setup/hugepages.sh@27 -- # local node
00:25:01.624 08:21:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:25:01.624 08:21:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:25:01.624 08:21:34 -- setup/hugepages.sh@32 -- # no_nodes=1
00:25:01.624 08:21:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:25:01.624 08:21:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:25:01.624 08:21:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:25:01.624 08:21:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[xtrace condensed: setup/common.sh@17-23 sets get=HugePages_Surp, node=0 and confirms /sys/devices/system/node/node0/meminfo exists]
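Before following the node-local read that continues below, note what the @107/@110 checks just verified: the kernel's pool must account for requested, surplus, and reserved pages. A minimal sketch of that arithmetic, reusing the hypothetical helper sketched earlier (values in comments are this run's):

    # Consistency check in the spirit of setup/hugepages.sh@107/@110:
    nr_hugepages=512                             # what the test requested
    surp=$(get_meminfo_value HugePages_Surp)     # 0 here
    resv=$(get_meminfo_value HugePages_Rsvd)     # 0 here
    total=$(get_meminfo_value HugePages_Total)   # 512 here
    if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage pool consistent: ${total} pages"
    else
      echo "hugepage pool mismatch: total=${total}" >&2
    fi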
00:25:01.624 08:21:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
[xtrace condensed: mapfile over node0's meminfo; the snapshot shows MemTotal: 12241960 kB, MemFree: 8621960 kB, MemUsed: 3620000 kB, FilePages: 2142000 kB, HugePages_Total: 512, HugePages_Free: 512, HugePages_Surp: 0; the field scan then runs until HugePages_Surp matches]
00:25:01.625 08:21:34 -- setup/common.sh@33 -- # echo 0
00:25:01.625 08:21:34 -- setup/common.sh@33 -- # return 0
00:25:01.625 08:21:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:25:01.625 08:21:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:25:01.625 08:21:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:25:01.625 08:21:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:25:01.625 node0=512 expecting 512
00:25:01.625 08:21:34 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:25:01.625 08:21:34 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:25:01.625 real 0m0.661s
00:25:01.625 user 0m0.331s
00:25:01.625 sys 0m0.372s
00:25:01.625 08:21:34 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:01.625 08:21:34 -- common/autotest_common.sh@10 -- # set +x
00:25:01.625 ************************************
00:25:01.625 END TEST per_node_1G_alloc
00:25:01.625 ************************************
00:25:01.625 08:21:34 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:25:01.625 08:21:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:25:01.625 08:21:34 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:25:01.625 08:21:34 -- common/autotest_common.sh@10 -- # set +x
00:25:01.625 ************************************
00:25:01.625 START TEST even_2G_alloc
00:25:01.625 ************************************
00:25:01.625 08:21:34 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:25:01.625 08:21:34 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:25:01.625 08:21:34 -- setup/hugepages.sh@49 -- # local size=2097152
00:25:01.625 08:21:34 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:25:01.625 08:21:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:25:01.625 08:21:34 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:25:01.625 08:21:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[xtrace condensed: setup/hugepages.sh@62-@74 initialise the user_nodes/nodes_test locals, _nr_hugepages=1024, _no_nodes=1]
00:25:01.625 08:21:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:25:01.625 08:21:34 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:25:01.625 08:21:34 -- setup/hugepages.sh@83 -- # : 0
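even_2G_alloc will walk the same node-local files that the @117 lookup above just read. As a hedged illustration (standard sysfs layout; the loop body is ours, not the harness's), per-node hugepage counters can be pulled like this:

    # Per-node counters live in /sys/devices/system/node/nodeN/meminfo and
    # carry a "Node N " prefix, stripped before parsing (the trace's
    # mem=("${mem[@]#Node +([0-9]) }") step does the same job).
    for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      free=$(sed 's/^Node [0-9]* //' "$node_dir/meminfo" |
             awk -F': *' '$1 == "HugePages_Free" {print $2}')
      echo "node${node} HugePages_Free: ${free}"   # node0: 512 in the run above
    done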
00:25:01.625 08:21:34 -- setup/hugepages.sh@84 -- # : 0
00:25:01.625 08:21:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:25:01.625 08:21:34 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:25:01.625 08:21:34 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:25:01.625 08:21:34 -- setup/hugepages.sh@153 -- # setup output
00:25:01.625 08:21:34 -- setup/common.sh@9 -- # [[ output == output ]]
00:25:01.625 08:21:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:25:02.192 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:25:02.192 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:25:02.192 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:25:02.192 08:21:35 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
[xtrace condensed: setup/hugepages.sh@89-@94 declare the node/sorted_t/sorted_s/surp/resv/anon locals]
00:25:02.192 08:21:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:25:02.192 08:21:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[xtrace condensed: setup/common.sh@17-29 as before with mem_f=/proc/meminfo; the snapshot now reflects the even allocation: MemFree: 7567120 kB, HugePages_Total: 1024, HugePages_Free: 1024, Hugetlb: 2097152 kB; the field scan starts toward AnonHugePages]
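The @96 test above is a transparent-hugepage gate: AnonHugePages is only sampled when THP is not pinned to never. A hedged sketch of that logic (the sysfs path is the standard one; thp/anon are our names, and get_meminfo_value is the helper sketched earlier):

    # THP gate in the spirit of setup/hugepages.sh@96-@97: the enabled file
    # reads like "always [madvise] never"; brackets mark the active policy.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo_value AnonHugePages)   # 0 in this run
    else
      anon=0                                    # THP off: nothing to sample
    fi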
[xtrace condensed: the scan walks MemTotal … HardwareCorrupted, 'continue' on each, until AnonHugePages matches]
00:25:02.193 08:21:35 -- setup/common.sh@33 -- # echo 0
00:25:02.193 08:21:35 -- setup/common.sh@33 -- # return 0
00:25:02.193 08:21:35 -- setup/hugepages.sh@97 -- # anon=0
00:25:02.193 08:21:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace condensed: setup/common.sh@17-29 as before; the snapshot matches the previous one apart from small churn (AnonPages: 120848 kB, Mapped: 48716 kB, KernelStack: 6432 kB, PageTables: 4512 kB); the field-by-field scan toward HugePages_Surp begins and continues past the end of this excerpt]
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.193 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.193 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.193 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- 
# read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- 
# continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.194 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.194 08:21:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.457 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.457 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.457 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.457 08:21:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.457 08:21:35 -- setup/common.sh@32 -- # continue 00:25:02.457 08:21:35 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.457 08:21:35 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.457 08:21:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.457 08:21:35 -- setup/common.sh@33 -- # echo 0 00:25:02.457 08:21:35 -- setup/common.sh@33 -- # return 0 00:25:02.457 08:21:35 -- setup/hugepages.sh@99 -- # surp=0 00:25:02.457 08:21:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:25:02.457 08:21:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:25:02.457 08:21:35 -- setup/common.sh@18 -- # local node= 00:25:02.457 08:21:35 -- setup/common.sh@19 -- # local var val 00:25:02.457 08:21:35 -- 
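For orientation, the helper traced above reduces to the following minimal sketch, reconstructed from the xtrace itself (an illustration of the technique in setup/common.sh, not the verbatim source): snapshot a meminfo file into an array, then linearly scan its "key: value" pairs until the requested counter matches.

```bash
#!/usr/bin/env bash
shopt -s extglob  # needed for the +([0-9]) pattern below

# Sketch of get_meminfo as seen in the trace: pick the right meminfo
# file, snapshot it, and scan line by line for the requested key.
get_meminfo() {
	local get=$1 node=$2
	local var val _
	local mem_f mem
	mem_f=/proc/meminfo
	# A node argument switches to that node's local counters.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node files prefix each line with "Node <N> "; strip it so
	# the same "key: value" parsing works for both file layouts.
	mem=("${mem[@]#Node +([0-9]) }")
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] && { echo "$val"; return 0; }
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo HugePages_Surp      # system-wide surplus pages (0 in this run)
get_meminfo HugePages_Surp 0    # the same counter scoped to NUMA node 0
```

The same scan serves every lookup in this log; only mem_f changes between the system-wide and per-node calls, which is why the trace repeats the identical loop for each counter.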
00:25:02.457 08:21:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:25:02.457 08:21:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:25:02.457 08:21:35 -- setup/common.sh@18 -- # local node=
00:25:02.457 08:21:35 -- setup/common.sh@19 -- # local var val
00:25:02.457 08:21:35 -- setup/common.sh@20 -- # local mem_f mem
00:25:02.457 08:21:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:25:02.457 08:21:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:25:02.457 08:21:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:25:02.457 08:21:35 -- setup/common.sh@28 -- # mapfile -t mem
00:25:02.457 08:21:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:25:02.457 08:21:35 -- setup/common.sh@31 -- # IFS=': '
00:25:02.457 08:21:35 -- setup/common.sh@31 -- # read -r var val _
00:25:02.457 08:21:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7567404 kB' 'MemAvailable: 9496932 kB' 'Buffers: 2436 kB' 'Cached: 2139568 kB' 'SwapCached: 0 kB' 'Active: 889032 kB' 'Inactive: 1372332 kB' 'Active(anon): 129828 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372332 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 121256 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147360 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 77052 kB' 'KernelStack: 6432 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 356972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[... xtrace elided: the loop again skips every key in the snapshot until it reaches HugePages_Rsvd ...]
00:25:02.459 08:21:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:25:02.459 08:21:35 -- setup/common.sh@33 -- # echo 0
00:25:02.459 08:21:35 -- setup/common.sh@33 -- # return 0
00:25:02.459 08:21:35 -- setup/hugepages.sh@100 -- # resv=0
00:25:02.459 nr_hugepages=1024
00:25:02.459 08:21:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:25:02.459 resv_hugepages=0
00:25:02.459 08:21:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:25:02.459 surplus_hugepages=0
00:25:02.459 08:21:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:25:02.459 anon_hugepages=0
00:25:02.459 08:21:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:25:02.459 08:21:35 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:25:02.459 08:21:35 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
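With anon, surp, and resv read back, the accounting checks at hugepages.sh@107 and @109 are plain arithmetic over this run's values; condensed:

```bash
# Counters read back above, with this run's values:
anon=0            # AnonHugePages  -> transparent hugepages in use
surp=0            # HugePages_Surp -> surplus pages beyond the pool
resv=0            # HugePages_Rsvd -> reserved but not yet faulted in
nr_hugepages=1024 # the configured pool size

# The expected 1024 pages must equal the pool plus surplus and
# reserved pages: 1024 == 1024 + 0 + 0.
(( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages )) \
	&& echo "hugepage pool fully accounted for"
```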
00:25:02.459 08:21:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:25:02.459 08:21:35 -- setup/common.sh@17 -- # local get=HugePages_Total
00:25:02.459 08:21:35 -- setup/common.sh@18 -- # local node=
00:25:02.459 08:21:35 -- setup/common.sh@19 -- # local var val
00:25:02.459 08:21:35 -- setup/common.sh@20 -- # local mem_f mem
00:25:02.459 08:21:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:25:02.459 08:21:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:25:02.459 08:21:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:25:02.459 08:21:35 -- setup/common.sh@28 -- # mapfile -t mem
00:25:02.459 08:21:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:25:02.459 08:21:35 -- setup/common.sh@31 -- # IFS=': '
00:25:02.459 08:21:35 -- setup/common.sh@31 -- # read -r var val _
00:25:02.459 08:21:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7567404 kB' 'MemAvailable: 9496932 kB' 'Buffers: 2436 kB' 'Cached: 2139568 kB' 'SwapCached: 0 kB' 'Active: 888868 kB' 'Inactive: 1372332 kB' 'Active(anon): 129664 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372332 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 121136 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147348 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 77040 kB' 'KernelStack: 6448 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 356972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[... xtrace elided: the loop skips every key in the snapshot until it reaches HugePages_Total ...]
00:25:02.460 08:21:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:25:02.460 08:21:35 -- setup/common.sh@33 -- # echo 1024
00:25:02.460 08:21:35 -- setup/common.sh@33 -- # return 0
00:25:02.460 08:21:35 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:25:02.460 08:21:35 -- setup/hugepages.sh@112 -- # get_nodes
00:25:02.460 08:21:35 -- setup/hugepages.sh@27 -- # local node
00:25:02.460 08:21:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:25:02.460 08:21:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:25:02.460 08:21:35 -- setup/hugepages.sh@32 -- # no_nodes=1
00:25:02.460 08:21:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
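The get_nodes step just traced, and the per-node verification that follows it, hinge on enumerating /sys/devices/system/node/node*; a small sketch reconstructed from the trace, with this runner's values (a single node expecting 1024 pages):

```bash
shopt -s extglob nullglob

# Sketch of get_nodes as traced above: enumerate the nodeN
# directories and record the expected page count for each.
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
	# ${node##*node} strips everything up to the last "node",
	# leaving just the numeric index (0 here).
	nodes_sys[${node##*node}]=1024
done
no_nodes=${#nodes_sys[@]}   # 1 on this VM
(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
```

Each enumerated node is then verified with the same get_meminfo helper, pointed at that node's meminfo file, as the next trace lines show.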
setup/common.sh@31-32 -- # read/continue over the remaining node0 meminfo fields (MemFree … HugePages_Free) until HugePages_Surp matched
00:25:02.461 08:21:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:25:02.461 08:21:35 -- setup/common.sh@33 -- # echo 0
00:25:02.461 08:21:35 -- setup/common.sh@33 -- # return 0
00:25:02.461 08:21:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:25:02.461 08:21:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:25:02.461 08:21:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:25:02.461 08:21:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:25:02.461 node0=1024 expecting 1024
00:25:02.461 08:21:35 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:25:02.461 08:21:35 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:25:02.461
00:25:02.461 real 0m0.657s
00:25:02.461 user 0m0.282s
00:25:02.461 sys 0m0.419s
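The scans condensed above all come from one helper visible in the xtrace: setup/common.sh's get_meminfo reads a meminfo-style file into an array (mapfile at @28), then walks it with an IFS=': ' read (@31) until the requested key matches, echoing its value (@33). The backslash-heavy patterns in the trace, such as [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]], are just how xtrace prints a literal string comparison. A minimal runnable sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source (the helper body is an assumption):

#!/usr/bin/env bash
shopt -s extglob                      # needed for the "Node N " prefix strip seen at @29

# Hedged reconstruction of the scan traced above; not verbatim setup/common.sh.
get_meminfo() {
    local get=$1 node=${2:-}          # e.g. get_meminfo HugePages_Surp 0
    local var val _
    local mem_f=/proc/meminfo
    # with a node argument, the per-node file is used instead (cf. the @23 probe)
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node N " prefix
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }   # @32..@33 in the trace
    done
    return 1
}

get_meminfo HugePages_Surp            # prints 0 here, matching the echo in the trace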
00:25:02.461 08:21:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:02.461 08:21:35 -- common/autotest_common.sh@10 -- # set +x 00:25:02.461 ************************************ 00:25:02.461 END TEST even_2G_alloc 00:25:02.461 ************************************ 00:25:02.461 08:21:35 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:25:02.461 08:21:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:02.461 08:21:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:02.461 08:21:35 -- common/autotest_common.sh@10 -- # set +x 00:25:02.461 ************************************ 00:25:02.461 START TEST odd_alloc 00:25:02.461 ************************************ 00:25:02.461 08:21:35 -- common/autotest_common.sh@1104 -- # odd_alloc 00:25:02.461 08:21:35 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:25:02.461 08:21:35 -- setup/hugepages.sh@49 -- # local size=2098176 00:25:02.461 08:21:35 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:25:02.461 08:21:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:25:02.461 08:21:35 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:25:02.461 08:21:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:25:02.461 08:21:35 -- setup/hugepages.sh@62 -- # user_nodes=() 00:25:02.461 08:21:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:25:02.461 08:21:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:25:02.461 08:21:35 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:25:02.461 08:21:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:25:02.461 08:21:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:25:02.461 08:21:35 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:25:02.461 08:21:35 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:25:02.461 08:21:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:25:02.461 08:21:35 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:25:02.461 08:21:35 -- setup/hugepages.sh@83 -- # : 0 00:25:02.461 08:21:35 -- setup/hugepages.sh@84 -- # : 0 00:25:02.461 08:21:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:25:02.461 08:21:35 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:25:02.461 08:21:35 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:25:02.461 08:21:35 -- setup/hugepages.sh@160 -- # setup output 00:25:02.461 08:21:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:25:02.461 08:21:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:02.723 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:02.985 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:02.985 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:02.985 08:21:36 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:25:02.985 08:21:36 -- setup/hugepages.sh@89 -- # local node 00:25:02.985 08:21:36 -- setup/hugepages.sh@90 -- # local sorted_t 00:25:02.985 08:21:36 -- setup/hugepages.sh@91 -- # local sorted_s 00:25:02.985 08:21:36 -- setup/hugepages.sh@92 -- # local surp 00:25:02.985 08:21:36 -- setup/hugepages.sh@93 -- # local resv 00:25:02.985 08:21:36 -- setup/hugepages.sh@94 -- # local anon 00:25:02.985 08:21:36 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:25:02.985 08:21:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:25:02.985 08:21:36 -- setup/common.sh@17 -- # local get=AnonHugePages 00:25:02.985 08:21:36 -- setup/common.sh@18 -- # local node= 
00:25:02.985 08:21:36 -- setup/common.sh@19 -- # local var val 00:25:02.985 08:21:36 -- setup/common.sh@20 -- # local mem_f mem 00:25:02.985 08:21:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:25:02.985 08:21:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:25:02.985 08:21:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:25:02.985 08:21:36 -- setup/common.sh@28 -- # mapfile -t mem 00:25:02.985 08:21:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:25:02.985 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.985 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.985 08:21:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7566220 kB' 'MemAvailable: 9495756 kB' 'Buffers: 2436 kB' 'Cached: 2139572 kB' 'SwapCached: 0 kB' 'Active: 889020 kB' 'Inactive: 1372340 kB' 'Active(anon): 129816 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372340 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 16 kB' 'AnonPages: 121252 kB' 'Mapped: 49084 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147384 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 77076 kB' 'KernelStack: 6428 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 356972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:25:02.985 08:21:36 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:02.985 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.985 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.985 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.985 08:21:36 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:02.985 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.985 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.985 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.985 08:21:36 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:02.985 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.985 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.985 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.985 08:21:36 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:02.985 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.985 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.985 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.985 08:21:36 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:02.985 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.985 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.986 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.986 08:21:36 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:02.986 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.986 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.986 
08:21:36 -- setup/common.sh@31-32 -- # read/continue over the remaining fields (Active … HardwareCorrupted) until AnonHugePages matched
00:25:02.986 08:21:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:25:02.986 08:21:36 -- setup/common.sh@33 -- # echo 0
00:25:02.986 08:21:36 -- setup/common.sh@33 -- # return 0
00:25:02.986 08:21:36 -- setup/hugepages.sh@97 -- # anon=0
00:25:02.986 08:21:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:25:02.986 08:21:36 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:25:02.986 08:21:36 -- setup/common.sh@18 -- # local node=
00:25:02.986 08:21:36 -- setup/common.sh@19 -- # local var val
00:25:02.986 08:21:36 -- setup/common.sh@20 -- # local mem_f mem
00:25:02.986 08:21:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:25:02.986 08:21:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:25:02.986 08:21:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:25:02.986 08:21:36 -- setup/common.sh@28 -- # mapfile -t mem
00:25:02.986 08:21:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:25:02.986 08:21:36 -- setup/common.sh@31 -- # IFS=': '
00:25:02.986 08:21:36 -- setup/common.sh@31 -- # read -r var val _
00:25:02.987
08:21:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7566220 kB' 'MemAvailable: 9495756 kB' 'Buffers: 2436 kB' 'Cached: 2139572 kB' 'SwapCached: 0 kB' 'Active: 888996 kB' 'Inactive: 1372340 kB' 'Active(anon): 129792 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372340 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 168 kB' 'Writeback: 0 kB' 'AnonPages: 121228 kB' 'Mapped: 48984 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147396 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 77088 kB' 'KernelStack: 6436 kB' 'PageTables: 4600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 356972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:25:02.987 08:21:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.987 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.987 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.987 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.987 08:21:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.987 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.987 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.987 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.987 08:21:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.987 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.987 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.987 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.987 08:21:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.987 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.987 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.987 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.987 08:21:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.987 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.987 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.987 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.987 08:21:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.987 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.987 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.987 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.987 08:21:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.987 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.987 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.987 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.987 08:21:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.987 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.987 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.987 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.987 
08:21:36 -- setup/common.sh@31-32 -- # read/continue over the remaining fields (Active(anon) … CmaTotal) toward HugePages_Surp
00:25:02.988 08:21:36 -- setup/common.sh@31 -- # read
-r var val _ 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:02.988 08:21:36 -- setup/common.sh@33 -- # echo 0 00:25:02.988 08:21:36 -- setup/common.sh@33 -- # return 0 00:25:02.988 08:21:36 -- setup/hugepages.sh@99 -- # surp=0 00:25:02.988 08:21:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:25:02.988 08:21:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:25:02.988 08:21:36 -- setup/common.sh@18 -- # local node= 00:25:02.988 08:21:36 -- setup/common.sh@19 -- # local var val 00:25:02.988 08:21:36 -- setup/common.sh@20 -- # local mem_f mem 00:25:02.988 08:21:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:25:02.988 08:21:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:25:02.988 08:21:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:25:02.988 08:21:36 -- setup/common.sh@28 -- # mapfile -t mem 00:25:02.988 08:21:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.988 08:21:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7566220 kB' 'MemAvailable: 9495756 kB' 'Buffers: 2436 kB' 'Cached: 2139572 kB' 'SwapCached: 0 kB' 'Active: 888932 kB' 'Inactive: 1372340 kB' 'Active(anon): 129728 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372340 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 168 kB' 'Writeback: 0 kB' 'AnonPages: 121196 kB' 'Mapped: 48984 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147396 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 77088 kB' 'KernelStack: 6420 kB' 'PageTables: 4548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 356972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.988 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:02.988 08:21:36 -- setup/common.sh@32 -- # continue 
00:25:02.988 08:21:36 -- setup/common.sh@31-32 -- # read/continue over the remaining fields (Unevictable … HugePages_Total) toward HugePages_Rsvd
00:25:02.989 08:21:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.989 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.989 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.989 08:21:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:02.989 08:21:36 -- setup/common.sh@33 -- # echo 0 00:25:02.989 08:21:36 -- setup/common.sh@33 -- # return 0 00:25:02.989 08:21:36 -- setup/hugepages.sh@100 -- # resv=0 00:25:02.989 nr_hugepages=1025 00:25:02.989 08:21:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:25:02.989 resv_hugepages=0 00:25:02.989 08:21:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:25:02.989 surplus_hugepages=0 00:25:02.989 08:21:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:25:02.989 anon_hugepages=0 00:25:02.989 08:21:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:25:02.989 08:21:36 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:25:02.989 08:21:36 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:25:02.989 08:21:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:25:02.989 08:21:36 -- setup/common.sh@17 -- # local get=HugePages_Total 00:25:02.989 08:21:36 -- setup/common.sh@18 -- # local node= 00:25:02.989 08:21:36 -- setup/common.sh@19 -- # local var val 00:25:02.989 08:21:36 -- setup/common.sh@20 -- # local mem_f mem 00:25:02.989 08:21:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:25:02.989 08:21:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:25:02.989 08:21:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:25:02.989 08:21:36 -- setup/common.sh@28 -- # mapfile -t mem 00:25:02.989 08:21:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:25:02.989 08:21:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7566512 kB' 'MemAvailable: 9496048 kB' 'Buffers: 2436 kB' 'Cached: 2139572 kB' 'SwapCached: 0 kB' 'Active: 888932 kB' 'Inactive: 1372340 kB' 'Active(anon): 129728 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372340 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 168 kB' 'Writeback: 0 kB' 'AnonPages: 121152 kB' 'Mapped: 48984 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147396 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 77088 kB' 'KernelStack: 6420 kB' 'PageTables: 4548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 356972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:25:02.989 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.989 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.989 08:21:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:02.989 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.989 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.989 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.989 08:21:36 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:25:02.990 08:21:36 -- setup/common.sh@32 -- # continue
00:25:02.990 08:21:36 -- setup/common.sh@31-32 -- # read/continue over the remaining fields (MemAvailable … Percpu) toward HugePages_Total
# read -r var val _ 00:25:02.991 08:21:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:02.991 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.991 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.991 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.991 08:21:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:02.991 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.991 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.991 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.991 08:21:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:02.991 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.991 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.991 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.991 08:21:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:02.991 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.991 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.991 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.991 08:21:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:02.991 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.991 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.991 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.991 08:21:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:02.991 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.991 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.991 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.991 08:21:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:02.991 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.991 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.991 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.991 08:21:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:02.991 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.991 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.991 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.991 08:21:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:02.991 08:21:36 -- setup/common.sh@32 -- # continue 00:25:02.991 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:02.991 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:02.991 08:21:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:02.991 08:21:36 -- setup/common.sh@33 -- # echo 1025 00:25:02.991 08:21:36 -- setup/common.sh@33 -- # return 0 00:25:02.991 08:21:36 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:25:02.991 08:21:36 -- setup/hugepages.sh@112 -- # get_nodes 00:25:02.991 08:21:36 -- setup/hugepages.sh@27 -- # local node 00:25:02.991 08:21:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:25:02.991 08:21:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:25:02.991 08:21:36 -- setup/hugepages.sh@32 -- # no_nodes=1 00:25:02.991 08:21:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:25:02.991 08:21:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:25:02.991 08:21:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:25:02.991 08:21:36 -- 
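The scan that just returned 1025 is the generic meminfo lookup every one of these hugepages checks runs through: pick /proc/meminfo (or a node's sysfs copy when a node id is passed), strip the per-node "Node N " prefix, then walk key/value pairs until the requested field matches. Below is a minimal bash re-creation of that pattern, assuming GNU bash with extglob; get_meminfo_sketch is an illustrative name, not the SPDK helper itself.

    #!/usr/bin/env bash
    # illustrative sketch of the lookup pattern traced above (not SPDK's code)
    shopt -s extglob

    get_meminfo_sketch() {    # usage: get_meminfo_sketch <Key> [<numa-node>]
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # a node id switches the source to that node's sysfs meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                 # e.g. 1025 for HugePages_Total above
                return 0
            fi
        done
        return 1
    }

Against the snapshot above, get_meminfo_sketch HugePages_Total would print 1025, and get_meminfo_sketch HugePages_Surp 0 mirrors the node0 read that follows.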
00:25:02.991 08:21:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:25:02.991 08:21:36 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:25:02.991 08:21:36 -- setup/common.sh@18 -- # local node=0
00:25:02.991 08:21:36 -- setup/common.sh@19 -- # local var val
00:25:02.991 08:21:36 -- setup/common.sh@20 -- # local mem_f mem
00:25:02.991 08:21:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:25:02.991 08:21:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:25:02.991 08:21:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:25:02.991 08:21:36 -- setup/common.sh@28 -- # mapfile -t mem
00:25:02.991 08:21:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:25:02.991 08:21:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7566512 kB' 'MemUsed: 4675448 kB' 'SwapCached: 0 kB' 'Active: 889180 kB' 'Inactive: 1372340 kB' 'Active(anon): 129976 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372340 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 168 kB' 'Writeback: 0 kB' 'FilePages: 2142008 kB' 'Mapped: 48984 kB' 'AnonPages: 121400 kB' 'Shmem: 10464 kB' 'KernelStack: 6420 kB' 'PageTables: 4548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70308 kB' 'Slab: 147396 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 77088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:25:02.991 08:21:36 -- setup/common.sh@31-32 -- # [per-key scan condensed: every field of the node0 snapshot from MemTotal through HugePages_Free is compared against HugePages_Surp and skipped via continue]
00:25:02.992 08:21:36 -- setup/common.sh@33 -- # echo 0
00:25:02.992 08:21:36 -- setup/common.sh@33 -- # return 0
00:25:02.992 08:21:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:25:02.992 08:21:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:25:02.992 08:21:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:25:02.992 08:21:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:25:02.992 08:21:36 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:25:02.992 node0=1025 expecting 1025
00:25:02.992 08:21:36 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:25:02.992 
00:25:02.992 real	0m0.671s
00:25:02.992 user	0m0.287s
00:25:02.992 sys	0m0.425s
00:25:02.992 08:21:36 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:02.992 08:21:36 -- common/autotest_common.sh@10 -- # set +x
00:25:02.992 ************************************
00:25:02.992 END TEST odd_alloc
00:25:02.992 ************************************
00:25:03.251 08:21:36 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:25:03.251 08:21:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:25:03.251 08:21:36 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:25:03.251 08:21:36 -- common/autotest_common.sh@10 -- # set +x
00:25:03.251 ************************************
00:25:03.251 START TEST custom_alloc
00:25:03.251 ************************************
00:25:03.251 08:21:36 -- common/autotest_common.sh@1104 -- # custom_alloc
00:25:03.251 08:21:36 -- setup/hugepages.sh@167 -- # local IFS=,
00:25:03.251 08:21:36 -- setup/hugepages.sh@169 -- # local node
00:25:03.251 08:21:36 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:25:03.251 08:21:36 -- setup/hugepages.sh@170 -- # local nodes_hp
00:25:03.251 08:21:36 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:25:03.251 08:21:36 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:25:03.251 08:21:36 -- setup/hugepages.sh@49 -- # local size=1048576
00:25:03.251 08:21:36 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:25:03.251 08:21:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:25:03.251 08:21:36 -- setup/hugepages.sh@57 -- # nr_hugepages=512
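get_test_nr_hugepages has just turned the 1048576 kB request into nr_hugepages=512. The arithmetic is plain integer division by the default hugepage size; in this sketch default_hugepages is assumed to come from the Hugepagesize field visible in the meminfo dumps (2048 kB on this VM).

    # illustrative arithmetic only; variable names follow the trace above
    size=1048576                                   # requested pool in kB (1 GiB)
    default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    if (( size >= default_hugepages )); then
        nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512
        echo "nr_hugepages=$nr_hugepages"
    fi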
00:25:03.251 08:21:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:25:03.251 08:21:36 -- setup/hugepages.sh@62 -- # user_nodes=()
00:25:03.251 08:21:36 -- setup/hugepages.sh@62 -- # local user_nodes
00:25:03.251 08:21:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:25:03.251 08:21:36 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:25:03.251 08:21:36 -- setup/hugepages.sh@67 -- # nodes_test=()
00:25:03.251 08:21:36 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:25:03.251 08:21:36 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:25:03.251 08:21:36 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:25:03.251 08:21:36 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:25:03.251 08:21:36 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:25:03.251 08:21:36 -- setup/hugepages.sh@83 -- # : 0
00:25:03.251 08:21:36 -- setup/hugepages.sh@84 -- # : 0
00:25:03.251 08:21:36 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:25:03.251 08:21:36 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:25:03.251 08:21:36 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:25:03.251 08:21:36 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:25:03.251 08:21:36 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:25:03.251 08:21:36 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:25:03.251 08:21:36 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:25:03.251 08:21:36 -- setup/hugepages.sh@62 -- # user_nodes=()
00:25:03.251 08:21:36 -- setup/hugepages.sh@62 -- # local user_nodes
00:25:03.251 08:21:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:25:03.251 08:21:36 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:25:03.251 08:21:36 -- setup/hugepages.sh@67 -- # nodes_test=()
00:25:03.251 08:21:36 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:25:03.251 08:21:36 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:25:03.251 08:21:36 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:25:03.251 08:21:36 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:25:03.251 08:21:36 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:25:03.251 08:21:36 -- setup/hugepages.sh@78 -- # return 0
00:25:03.251 08:21:36 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:25:03.251 08:21:36 -- setup/hugepages.sh@187 -- # setup output
00:25:03.251 08:21:36 -- setup/common.sh@9 -- # [[ output == output ]]
00:25:03.251 08:21:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:25:03.511 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:25:03.774 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:25:03.774 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:25:03.774 08:21:36 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:25:03.774 08:21:36 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:25:03.774 08:21:36 -- setup/hugepages.sh@89 -- # local node
00:25:03.774 08:21:36 -- setup/hugepages.sh@90 -- # local sorted_t
00:25:03.774 08:21:36 -- setup/hugepages.sh@91 -- # local sorted_s
00:25:03.774 08:21:36 -- setup/hugepages.sh@92 -- # local surp
00:25:03.774 08:21:36 -- setup/hugepages.sh@93 -- # local resv
00:25:03.774 08:21:36 -- setup/hugepages.sh@94 -- # local anon
00:25:03.774 08:21:36 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:25:03.774 08:21:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
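The verify_nr_hugepages pass that begins here re-reads the counters the test just configured. A hedged sketch of the invariant being checked, reusing the illustrative get_meminfo_sketch from after the HugePages_Total scan above; the real bookkeeping in hugepages.sh may differ in detail.

    # consistent when total == configured + surplus + reserved; anon (THP, in
    # kB) is recorded separately so it is not mistaken for the static pool
    nr_hugepages=512
    anon=$(get_meminfo_sketch AnonHugePages)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    total=$(get_meminfo_sketch HugePages_Total)
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2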
00:25:03.774 08:21:36 -- setup/common.sh@17 -- # local get=AnonHugePages
00:25:03.774 08:21:36 -- setup/common.sh@18 -- # local node=
00:25:03.774 08:21:36 -- setup/common.sh@19 -- # local var val
00:25:03.774 08:21:36 -- setup/common.sh@20 -- # local mem_f mem
00:25:03.774 08:21:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:25:03.774 08:21:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:25:03.774 08:21:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:25:03.774 08:21:36 -- setup/common.sh@28 -- # mapfile -t mem
00:25:03.774 08:21:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:25:03.774 08:21:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8618228 kB' 'MemAvailable: 10547760 kB' 'Buffers: 2436 kB' 'Cached: 2139568 kB' 'SwapCached: 0 kB' 'Active: 889328 kB' 'Inactive: 1372336 kB' 'Active(anon): 130124 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 121304 kB' 'Mapped: 49024 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147348 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 77040 kB' 'KernelStack: 6440 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 356972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:25:03.774 08:21:36 -- setup/common.sh@31-32 -- # [per-key scan condensed: every field from MemTotal through HardwareCorrupted is compared against AnonHugePages and skipped via continue]
00:25:03.775 08:21:36 -- setup/common.sh@33 -- # echo 0
00:25:03.775 08:21:36 -- setup/common.sh@33 -- # return 0
00:25:03.775 08:21:36 -- setup/hugepages.sh@97 -- # anon=0
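With anon=0 recorded, the script returns to meminfo once per remaining counter, re-scanning the file each time. Purely as a compactness comparison (not what the SPDK script does), the same hugepage counters can be collected in a single awk pass:

    # one pass over /proc/meminfo instead of one scan per counter
    read -r total free rsvd surp < <(awk '
        /^HugePages_Total:/ {t=$2}  /^HugePages_Free:/ {f=$2}
        /^HugePages_Rsvd:/  {r=$2}  /^HugePages_Surp:/ {s=$2}
        END {print t, f, r, s}' /proc/meminfo)
    echo "total=$total free=$free rsvd=$rsvd surp=$surp"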
00:25:03.775 08:21:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:25:03.775 08:21:36 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:25:03.775 08:21:36 -- setup/common.sh@18 -- # local node=
00:25:03.775 08:21:36 -- setup/common.sh@19 -- # local var val
00:25:03.775 08:21:36 -- setup/common.sh@20 -- # local mem_f mem
00:25:03.775 08:21:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:25:03.775 08:21:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:25:03.775 08:21:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:25:03.775 08:21:36 -- setup/common.sh@28 -- # mapfile -t mem
00:25:03.775 08:21:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:25:03.775 08:21:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8618228 kB' 'MemAvailable: 10547760 kB' 'Buffers: 2436 kB' 'Cached: 2139568 kB' 'SwapCached: 0 kB' 'Active: 889240 kB' 'Inactive: 1372336 kB' 'Active(anon): 130036 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 121172 kB' 'Mapped: 48776 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147348 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 77040 kB' 'KernelStack: 6416 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 356972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:25:03.775 08:21:36 -- setup/common.sh@31-32 -- # [per-key scan condensed: every field from MemTotal through HugePages_Free is compared against HugePages_Surp and skipped via continue]
00:25:03.776 08:21:36 -- setup/common.sh@33 -- # echo 0
00:25:03.776 08:21:36 -- setup/common.sh@33 -- # return 0
00:25:03.776 08:21:36 -- setup/hugepages.sh@99 -- # surp=0
00:25:03.776 08:21:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:25:03.776 08:21:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:25:03.776 08:21:36 -- setup/common.sh@18 -- # local node=
00:25:03.776 08:21:36 -- setup/common.sh@19 -- # local var val
00:25:03.776 08:21:36 -- setup/common.sh@20 -- # local mem_f mem
00:25:03.776 08:21:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:25:03.776 08:21:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:25:03.776 08:21:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:25:03.776 08:21:36 -- setup/common.sh@28 -- # mapfile -t mem
00:25:03.776 08:21:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:25:03.776 08:21:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8618228 kB' 'MemAvailable: 10547760 kB' 'Buffers: 2436 kB' 'Cached: 2139568 kB' 'SwapCached: 0 kB' 'Active: 888908 kB' 'Inactive: 1372336 kB' 'Active(anon): 129704 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120844 kB' 'Mapped: 48776 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147344 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 77036 kB' 'KernelStack: 6384 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 356972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:25:03.776 08:21:36 -- setup/common.sh@31-32 -- # [per-key scan in progress: fields from MemTotal through Shmem compared against HugePages_Rsvd and skipped via continue]
00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:25:03.777 08:21:36 --
setup/common.sh@31 -- # IFS=': ' 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.777 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.777 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 
08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:03.778 08:21:36 -- setup/common.sh@33 -- # echo 0 00:25:03.778 08:21:36 -- setup/common.sh@33 -- # return 0 00:25:03.778 08:21:36 -- setup/hugepages.sh@100 -- # resv=0 00:25:03.778 nr_hugepages=512 00:25:03.778 08:21:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:25:03.778 resv_hugepages=0 00:25:03.778 08:21:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:25:03.778 surplus_hugepages=0 00:25:03.778 08:21:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:25:03.778 anon_hugepages=0 00:25:03.778 08:21:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:25:03.778 08:21:36 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:25:03.778 08:21:36 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:25:03.778 08:21:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:25:03.778 08:21:36 -- setup/common.sh@17 -- # local get=HugePages_Total 00:25:03.778 08:21:36 -- setup/common.sh@18 -- # local node= 00:25:03.778 08:21:36 -- setup/common.sh@19 -- # local var val 00:25:03.778 08:21:36 -- setup/common.sh@20 -- # local mem_f mem 00:25:03.778 08:21:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:25:03.778 08:21:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:25:03.778 08:21:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:25:03.778 08:21:36 -- setup/common.sh@28 -- # mapfile -t mem 00:25:03.778 08:21:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:25:03.778 08:21:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8618228 kB' 'MemAvailable: 10547760 kB' 'Buffers: 2436 kB' 'Cached: 2139568 kB' 'SwapCached: 0 kB' 'Active: 888792 kB' 'Inactive: 1372336 kB' 'Active(anon): 129588 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120984 kB' 'Mapped: 48776 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147340 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 77032 kB' 'KernelStack: 6368 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 356972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ Mlocked 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.778 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.778 08:21:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:36 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:36 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:36 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:36 -- 
setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
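The loop records above and below all come from one helper, get_meminfo in setup/common.sh: it snapshots a meminfo file, strips any per-node prefix, then scans field by field until the requested key matches and its value is echoed. A minimal reconstruction from the trace follows; the function and variable names are taken from the trace itself, while the exact body is an assumption, not the SPDK source:

    shopt -s extglob                      # for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}          # field name, optional NUMA node
        local mem_f=/proc/meminfo mem line var val _
        # With a node argument, read that node's own meminfo instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the "continue" records above
            echo "$val"
            return 0
        done
    }

    get_meminfo HugePages_Total      # prints 512 during custom_alloc
    get_meminfo HugePages_Surp 0     # node 0's surplus pages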
00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:03.779 08:21:37 -- setup/common.sh@33 -- # echo 512 00:25:03.779 08:21:37 -- setup/common.sh@33 -- # return 0 00:25:03.779 08:21:37 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:25:03.779 08:21:37 -- setup/hugepages.sh@112 -- # get_nodes 00:25:03.779 08:21:37 -- setup/hugepages.sh@27 -- # local node 00:25:03.779 08:21:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:25:03.779 08:21:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:25:03.779 08:21:37 -- setup/hugepages.sh@32 -- # no_nodes=1 00:25:03.779 08:21:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:25:03.779 08:21:37 -- setup/hugepages.sh@115 
-- # for node in "${!nodes_test[@]}" 00:25:03.779 08:21:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:25:03.779 08:21:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:25:03.779 08:21:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:25:03.779 08:21:37 -- setup/common.sh@18 -- # local node=0 00:25:03.779 08:21:37 -- setup/common.sh@19 -- # local var val 00:25:03.779 08:21:37 -- setup/common.sh@20 -- # local mem_f mem 00:25:03.779 08:21:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:25:03.779 08:21:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:25:03.779 08:21:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:25:03.779 08:21:37 -- setup/common.sh@28 -- # mapfile -t mem 00:25:03.779 08:21:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.779 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.779 08:21:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8618228 kB' 'MemUsed: 3623732 kB' 'SwapCached: 0 kB' 'Active: 889252 kB' 'Inactive: 1372336 kB' 'Active(anon): 130048 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 2142004 kB' 'Mapped: 48776 kB' 'AnonPages: 121180 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70308 kB' 'Slab: 147340 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 77032 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 
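After the system-wide counts check out, the trace walks each NUMA node: get_nodes enumerates /sys/devices/system/node/node*, and every node's expected count is its requested share plus the reserved and surplus pages just read. A sketch of that bookkeeping, reusing the get_meminfo reconstruction above; nodes_sys, nodes_test and resv follow the trace, the rest is assumed:

    shopt -s extglob                 # for node+([0-9]), as in the trace
    nodes_sys=() nodes_test=()
    nodes_test[0]=512                # pages custom_alloc asked for on node 0
    resv=0                           # HugePages_Rsvd, read earlier in this trace

    for node in /sys/devices/system/node/node+([0-9]); do
        node=${node##*node}          # ".../node0" -> "0"
        nodes_sys[node]=$(get_meminfo HugePages_Total "$node")
    done

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done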
08:21:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 
-- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # continue 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:03.780 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:03.780 08:21:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:03.780 08:21:37 -- setup/common.sh@33 -- # echo 0 00:25:03.780 08:21:37 -- setup/common.sh@33 -- # return 0 00:25:03.780 08:21:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:25:03.780 08:21:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:25:03.780 08:21:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:25:03.780 08:21:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:25:03.780 node0=512 expecting 512 00:25:03.780 08:21:37 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:25:03.780 08:21:37 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:25:03.780 00:25:03.780 real 0m0.691s 00:25:03.780 user 0m0.309s 00:25:03.780 sys 0m0.426s 00:25:03.780 08:21:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:03.780 08:21:37 -- common/autotest_common.sh@10 -- # set +x 00:25:03.780 ************************************ 00:25:03.780 END TEST custom_alloc 00:25:03.780 ************************************ 00:25:03.780 08:21:37 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:25:03.780 08:21:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:03.780 08:21:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:03.781 08:21:37 -- common/autotest_common.sh@10 -- # set +x 00:25:03.781 ************************************ 00:25:03.781 START TEST no_shrink_alloc 00:25:03.781 ************************************ 00:25:03.781 08:21:37 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:25:03.781 08:21:37 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:25:03.781 08:21:37 -- setup/hugepages.sh@49 -- # local size=2097152 00:25:03.781 08:21:37 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:25:03.781 08:21:37 -- setup/hugepages.sh@51 -- # shift 00:25:03.781 08:21:37 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:25:03.781 08:21:37 -- setup/hugepages.sh@52 -- # local node_ids 00:25:03.781 08:21:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:25:03.781 08:21:37 -- setup/hugepages.sh@57 -- # 
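custom_alloc ends with node0=512 expecting 512, and the record resuming below assigns nr_hugepages=1024 for no_shrink_alloc. That number follows from the requested size if, as the dumps corroborate, the size argument is in kB: 2097152 kB at the 2048 kB page size shown in every dump is 1024 pages, just as custom_alloc's 512 pages matched its 'Hugetlb: 1048576 kB' line. Restated as a hedged two-liner:

    size_kb=2097152                     # argument to get_test_nr_hugepages
    hp_kb=$(get_meminfo Hugepagesize)   # 2048 on this machine
    echo $(( size_kb / hp_kb ))         # -> 1024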
nr_hugepages=1024 00:25:03.781 08:21:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:25:03.781 08:21:37 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:25:03.781 08:21:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:25:03.781 08:21:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:25:03.781 08:21:37 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:25:03.781 08:21:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:25:03.781 08:21:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:25:03.781 08:21:37 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:25:03.781 08:21:37 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:25:03.781 08:21:37 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:25:03.781 08:21:37 -- setup/hugepages.sh@73 -- # return 0 00:25:03.781 08:21:37 -- setup/hugepages.sh@198 -- # setup output 00:25:03.781 08:21:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:25:03.781 08:21:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:04.349 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:04.349 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:04.349 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:04.349 08:21:37 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:25:04.349 08:21:37 -- setup/hugepages.sh@89 -- # local node 00:25:04.349 08:21:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:25:04.349 08:21:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:25:04.349 08:21:37 -- setup/hugepages.sh@92 -- # local surp 00:25:04.349 08:21:37 -- setup/hugepages.sh@93 -- # local resv 00:25:04.349 08:21:37 -- setup/hugepages.sh@94 -- # local anon 00:25:04.349 08:21:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:25:04.349 08:21:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:25:04.349 08:21:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:25:04.349 08:21:37 -- setup/common.sh@18 -- # local node= 00:25:04.349 08:21:37 -- setup/common.sh@19 -- # local var val 00:25:04.349 08:21:37 -- setup/common.sh@20 -- # local mem_f mem 00:25:04.349 08:21:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:25:04.349 08:21:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:25:04.349 08:21:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:25:04.349 08:21:37 -- setup/common.sh@28 -- # mapfile -t mem 00:25:04.349 08:21:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:04.349 08:21:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7572724 kB' 'MemAvailable: 9502256 kB' 'Buffers: 2436 kB' 'Cached: 2139568 kB' 'SwapCached: 0 kB' 'Active: 889168 kB' 'Inactive: 1372336 kB' 'Active(anon): 129964 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 121116 kB' 'Mapped: 48872 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147340 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 77032 kB' 'KernelStack: 6392 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 357104 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # continue 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # continue 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # continue 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # continue 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # continue 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # continue 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # continue 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # continue 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # continue 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # continue 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:04.349 08:21:37 -- setup/common.sh@32 -- # continue 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # IFS=': ' 00:25:04.349 08:21:37 -- setup/common.sh@31 -- # read -r var val _ 
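verify_nr_hugepages opens (the hugepages.sh@96 record above) by testing the transparent-hugepage mode; xtrace escapes the brackets, so the operand is the literal string "always [madvise] never" read from sysfs, where the bracketed word is the active mode. Anonymous hugepages are only counted into the pool when the mode is not "never", which is why the AnonHugePages scan runs here. A sketch of that gate, under the same assumptions as above:

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
    fi
    echo "anon=$anon"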
00:25:04.349 08:21:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:25:04.349 08:21:37 -- setup/common.sh@32 -- # continue
[xtrace elided: the @31 IFS=': ' / read -r var val _ and @32 compare/continue cycle repeats for each remaining /proc/meminfo field, Unevictable through HardwareCorrupted, none of which match]
00:25:04.350 08:21:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:25:04.350 08:21:37 -- setup/common.sh@33 -- # echo 0
00:25:04.350 08:21:37 -- setup/common.sh@33 -- # return 0
00:25:04.350 08:21:37 -- setup/hugepages.sh@97 -- # anon=0
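Before the next lookup, a note on what this helper is actually doing, since the trace obscures it: get_meminfo scans a meminfo file line by line with IFS=': ' and echoes the value column of the first field whose name matches its argument. The following is a minimal sketch of that behavior as it is visible in this log, under an illustrative name (get_meminfo_sketch is not the real source of the setup/common.sh seen in the trace, which mapfiles the file into an array first):

    # Sketch: scan a meminfo file and print the value column of one field.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # with a node argument, prefer that node's own meminfo file
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        # per-node lines carry a "Node N " prefix; strip it so names align
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the continue entries in the trace
            echo "$val"
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    get_meminfo_sketch AnonHugePages   # prints 0 on this host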
00:25:04.350 08:21:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:25:04.350 08:21:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:25:04.350 08:21:37 -- setup/common.sh@18 -- # local node=
00:25:04.350 08:21:37 -- setup/common.sh@19 -- # local var val
00:25:04.350 08:21:37 -- setup/common.sh@20 -- # local mem_f mem
00:25:04.350 08:21:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:25:04.350 08:21:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:25:04.350 08:21:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:25:04.350 08:21:37 -- setup/common.sh@28 -- # mapfile -t mem
00:25:04.350 08:21:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:25:04.350 08:21:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7572908 kB' 'MemAvailable: 9502440 kB' 'Buffers: 2436 kB' 'Cached: 2139568 kB' 'SwapCached: 0 kB' 'Active: 888884 kB' 'Inactive: 1372336 kB' 'Active(anon): 129680 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 121100 kB' 'Mapped: 48776 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147336 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 77028 kB' 'KernelStack: 6384 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 357104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:25:04.350 08:21:37 -- setup/common.sh@31 -- # IFS=': '
00:25:04.350 08:21:37 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: every field from MemTotal through HugePages_Rsvd is compared against HugePages_Surp and skipped with continue]
00:25:04.613 08:21:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:25:04.613 08:21:37 -- setup/common.sh@33 -- # echo 0
00:25:04.613 08:21:37 -- setup/common.sh@33 -- # return 0
00:25:04.613 08:21:37 -- setup/hugepages.sh@99 -- # surp=0
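In case the backslash runs in these comparisons look like log corruption, they are not: under set -x, bash prints a quoted right-hand side of == inside [[ ]] with each character escaped, to signal that it is matched literally rather than as a glob pattern. A quick demo (illustrative, not from this run):

    set -x
    get=HugePages_Surp
    [[ HugePages_Surp == "$get" ]] && echo matched
    # xtrace renders the test as:
    # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    set +x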
00:25:04.613 08:21:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:25:04.613 08:21:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:25:04.613 08:21:37 -- setup/common.sh@18 -- # local node=
00:25:04.613 08:21:37 -- setup/common.sh@19 -- # local var val
00:25:04.613 08:21:37 -- setup/common.sh@20 -- # local mem_f mem
00:25:04.613 08:21:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:25:04.613 08:21:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:25:04.613 08:21:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:25:04.613 08:21:37 -- setup/common.sh@28 -- # mapfile -t mem
00:25:04.613 08:21:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:25:04.613 08:21:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7572908 kB' 'MemAvailable: 9502440 kB' 'Buffers: 2436 kB' 'Cached: 2139568 kB' 'SwapCached: 0 kB' 'Active: 888824 kB' 'Inactive: 1372336 kB' 'Active(anon): 129620 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 121008 kB' 'Mapped: 48776 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147336 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 77028 kB' 'KernelStack: 6368 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 357104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:25:04.613 08:21:37 -- setup/common.sh@31 -- # IFS=': '
00:25:04.613 08:21:37 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: every field from MemTotal through HugePages_Free is compared against HugePages_Rsvd and skipped with continue]
00:25:04.614 08:21:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:25:04.614 08:21:37 -- setup/common.sh@33 -- # echo 0
00:25:04.614 08:21:37 -- setup/common.sh@33 -- # return 0
00:25:04.614 08:21:37 -- setup/hugepages.sh@100 -- # resv=0
00:25:04.614 nr_hugepages=1024
00:25:04.614 08:21:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:25:04.614 resv_hugepages=0
00:25:04.614 08:21:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:25:04.614 surplus_hugepages=0
00:25:04.614 08:21:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:25:04.614 anon_hugepages=0
00:25:04.614 08:21:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:25:04.614 08:21:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:25:04.614 08:21:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
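Spelling out the arithmetic this block just verified: the configured hugepage count must match the kernel's view once surplus and reserved pages are accounted for. A sketch using the illustrative helper from above; expected=1024 is this run's configured count, and the check mirrors the (( 1024 == nr_hugepages + surp + resv )) test in the trace:

    expected=1024
    anon=$(get_meminfo_sketch AnonHugePages)     # 0 in this run
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)  # 1024 in this run
    (( total == expected + surp + resv )) || echo 'hugepage accounting mismatch' >&2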
00:25:04.614 08:21:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:25:04.614 08:21:37 -- setup/common.sh@17 -- # local get=HugePages_Total
00:25:04.614 08:21:37 -- setup/common.sh@18 -- # local node=
00:25:04.614 08:21:37 -- setup/common.sh@19 -- # local var val
00:25:04.614 08:21:37 -- setup/common.sh@20 -- # local mem_f mem
00:25:04.614 08:21:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:25:04.614 08:21:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:25:04.614 08:21:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:25:04.614 08:21:37 -- setup/common.sh@28 -- # mapfile -t mem
00:25:04.614 08:21:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:25:04.614 08:21:37 -- setup/common.sh@31 -- # IFS=': '
00:25:04.614 08:21:37 -- setup/common.sh@31 -- # read -r var val _
00:25:04.614 08:21:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7574880 kB' 'MemAvailable: 9504412 kB' 'Buffers: 2436 kB' 'Cached: 2139568 kB' 'SwapCached: 0 kB' 'Active: 883872 kB' 'Inactive: 1372336 kB' 'Active(anon): 124668 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 116056 kB' 'Mapped: 48136 kB' 'Shmem: 10464 kB' 'KReclaimable: 70308 kB' 'Slab: 147260 kB' 'SReclaimable: 70308 kB' 'SUnreclaim: 76952 kB' 'KernelStack: 6288 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 335056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54920 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: every field from MemTotal through Unaccepted is compared against HugePages_Total and skipped with continue]
00:25:04.615 08:21:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:25:04.615 08:21:37 -- setup/common.sh@33 -- # echo 1024
00:25:04.615 08:21:37 -- setup/common.sh@33 -- # return 0
00:25:04.615 08:21:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:25:04.615 08:21:37 -- setup/hugepages.sh@112 -- # get_nodes
00:25:04.615 08:21:37 -- setup/hugepages.sh@27 -- # local node
00:25:04.615 08:21:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:25:04.615 08:21:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:25:04.615 08:21:37 -- setup/hugepages.sh@32 -- # no_nodes=1
00:25:04.615 08:21:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:25:04.615 08:21:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:25:04.615 08:21:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
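The next lookup is the first per-node one: given a node argument, the helper switches its source from /proc/meminfo to that node's own meminfo file, whose lines are prefixed with "Node 0 " and therefore need the mem=("${mem[@]#Node +([0-9]) }") strip seen in the trace before the field names line up. A standalone sketch of that prefix handling (extglob is required for the +([0-9]) pattern; the grep target is just for illustration):

    shopt -s extglob
    node=0
    mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    printf '%s\n' "${mem[@]}" | grep -F HugePages_Surp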
kB' 'SReclaimable: 70304 kB' 'SUnreclaim: 76928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... trace elided: setup/common.sh@31-32 walks the node snapshot above one 'key: value' line at a time ('IFS=": "; read -r var val _') and hits '# continue' on every key before the requested one: MemTotal, MemFree, MemUsed, SwapCached, Active/Inactive and their (anon)/(file) splits, Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, the hugepage mapping counters, HugePages_Total and HugePages_Free ...]
00:25:04.617 08:21:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:04.617 08:21:37 -- setup/common.sh@33 -- # echo 0 00:25:04.617 08:21:37 -- setup/common.sh@33 -- # return 0
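The scan just traced is the whole of the lookup: no awk or grep, just bash splitting each snapshot line on ': '. A minimal sketch of the same technique, reconstructed from the trace rather than copied from SPDK's setup/common.sh (the function body and its error handling are approximations):

    #!/usr/bin/env bash
    shopt -s extglob

    # Print the value of one meminfo counter, system-wide or for one NUMA node.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # A node argument switches to the per-node file, whose lines carry a
        # "Node N " prefix that must be stripped before splitting on ': '.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && echo "$val" && return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 on this host, matching the trace

With an empty node argument the '-e' test fails (hence the literal /sys/devices/system/node/node/meminfo path visible in the trace) and the function falls back to /proc/meminfo.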
00:25:04.617 08:21:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:25:04.617 08:21:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:25:04.617 08:21:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:25:04.617 08:21:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 node0=1024 expecting 1024 00:25:04.617 08:21:37 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:25:04.617 08:21:37 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:25:04.617 08:21:37 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:25:04.617 08:21:37 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:25:04.617 08:21:37 -- setup/hugepages.sh@202 -- # setup output 00:25:04.617 08:21:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:25:04.617 08:21:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:04.875 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:05.137 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:05.137 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:05.137 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:25:05.137 08:21:38 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:25:05.137 08:21:38 -- setup/hugepages.sh@89 -- # local node 00:25:05.137 08:21:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:25:05.137 08:21:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:25:05.137 08:21:38 -- setup/hugepages.sh@92 -- # local surp 00:25:05.137 08:21:38 -- setup/hugepages.sh@93 -- # local resv 00:25:05.137 08:21:38 -- setup/hugepages.sh@94 -- # local anon 00:25:05.137 08:21:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
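The 'node0=1024 expecting 1024' line and the verify_nr_hugepages entry above are the per-node half of the check (the @96 test just confirms transparent hugepages are not forced to 'never'). A hedged sketch of that per-node pass, using the get_meminfo helper sketched earlier; the shape is reconstructed from the trace (the nodes_test/sorted_t bookkeeping is omitted), not SPDK's verbatim source:

    # Walk every NUMA node, report its hugepage total, and compare it against
    # the requested pool size; a single-node VM like this one prints only node0.
    verify_nr_hugepages() {
        local nr_hugepages=$1 node total
        for node in /sys/devices/system/node/node[0-9]*; do
            node=${node##*node}
            total=$(get_meminfo HugePages_Total "$node")
            echo "node$node=$total expecting $nr_hugepages"
            (( total == nr_hugepages )) || return 1
        done
    }

    verify_nr_hugepages 1024   # -> "node0=1024 expecting 1024" on this host

Note the INFO line above: setup.sh asked for NRHUGE=512 but found 1024 pages already allocated and left the larger pool alone, which is why the verifier still expects 1024.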
00:25:05.137 08:21:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:25:05.137 08:21:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:25:05.137 08:21:38 -- setup/common.sh@18 -- # local node= 00:25:05.137 08:21:38 -- setup/common.sh@19 -- # local var val 00:25:05.137 08:21:38 -- setup/common.sh@20 -- # local mem_f mem 00:25:05.137 08:21:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:25:05.137 08:21:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:25:05.137 08:21:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:25:05.137 08:21:38 -- setup/common.sh@28 -- # mapfile -t mem 00:25:05.137 08:21:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:25:05.137 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.137 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.137 08:21:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7575492 kB' 'MemAvailable: 9505020 kB' 'Buffers: 2436 kB' 'Cached: 2139568 kB' 'SwapCached: 0 kB' 'Active: 883920 kB' 'Inactive: 1372336 kB' 'Active(anon): 124716 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 116080 kB' 'Mapped: 48252 kB' 'Shmem: 10464 kB' 'KReclaimable: 70304 kB' 'Slab: 147120 kB' 'SReclaimable: 70304 kB' 'SUnreclaim: 76816 kB' 'KernelStack: 6272 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 335056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54904 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[... trace elided: setup/common.sh@31-32 reads the snapshot above line by line and '# continue's past every key until AnonHugePages matches ...]
00:25:05.138 08:21:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:25:05.138 08:21:38 -- setup/common.sh@33 -- # echo 0 00:25:05.138 08:21:38 -- setup/common.sh@33 -- # return 0 00:25:05.138 08:21:38 -- setup/hugepages.sh@97 -- # anon=0 00:25:05.138 08:21:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:25:05.138 08:21:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:25:05.138 08:21:38 -- setup/common.sh@18 -- # local node= 00:25:05.138 08:21:38 -- setup/common.sh@19 -- # local var val 00:25:05.138 08:21:38 -- setup/common.sh@20 -- # local mem_f mem 00:25:05.138 08:21:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:25:05.138 08:21:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:25:05.138 08:21:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:25:05.138 08:21:38 -- setup/common.sh@28 -- # mapfile -t mem 00:25:05.138 08:21:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:25:05.138 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.138 08:21:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7575492 kB' 'MemAvailable: 9505020 kB' 'Buffers: 2436 kB' 'Cached: 2139568 kB' 'SwapCached: 0 kB' 'Active: 883488 kB' 'Inactive: 1372336 kB' 'Active(anon): 124284 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 115652 kB' 'Mapped: 48036 kB' 'Shmem: 10464 kB' 'KReclaimable: 70304 kB' 'Slab: 147120 kB' 'SReclaimable: 70304 kB' 'SUnreclaim: 76816 kB' 'KernelStack: 6240 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 335056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54888 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
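All the counters this pass collects can be eyeballed in one shot; on this host the command below reports the same values the snapshots above show (AnonHugePages 0 kB, HugePages_Total/Free 1024, HugePages_Rsvd/Surp 0, Hugepagesize 2048 kB, Hugetlb 2097152 kB):

    grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo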
[... trace elided: setup/common.sh@31-32 reads the snapshot above line by line and '# continue's past every key until HugePages_Surp matches ...]
00:25:05.139 08:21:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.139 08:21:38 -- setup/common.sh@33 -- # echo 0 00:25:05.139 08:21:38 -- setup/common.sh@33 -- # return 0 00:25:05.139 08:21:38 -- setup/hugepages.sh@99 -- # surp=0
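Each counter costs a full pass over /proc/meminfo, which is also why the snapshots drift slightly between calls (AnonPages is 116080 kB in the first one and 115652 kB in the second): the file is re-read live every time. A single-pass alternative, purely illustrative and not part of the SPDK scripts:

    # Collect all four hugepage-related counters in one read of /proc/meminfo.
    read -r anon surp resv total < <(awk '
        $1 == "AnonHugePages:"   { a = $2 }
        $1 == "HugePages_Surp:"  { s = $2 }
        $1 == "HugePages_Rsvd:"  { r = $2 }
        $1 == "HugePages_Total:" { t = $2 }
        END { print a, s, r, t }
    ' /proc/meminfo)
    echo "anon=$anon surp=$surp resv=$resv total=$total"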
00:25:05.139 08:21:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:25:05.139 08:21:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:25:05.139 08:21:38 -- setup/common.sh@18 -- # local node= 00:25:05.139 08:21:38 -- setup/common.sh@19 -- # local var val 00:25:05.139 08:21:38 -- setup/common.sh@20 -- # local mem_f mem 00:25:05.139 08:21:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:25:05.139 08:21:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:25:05.139 08:21:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:25:05.139 08:21:38 -- setup/common.sh@28 -- # mapfile -t mem 00:25:05.139 08:21:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:25:05.139 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.139 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.140 08:21:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7575492 kB' 'MemAvailable: 9505020 kB' 'Buffers: 2436 kB' 'Cached: 2139568 kB' 'SwapCached: 0 kB' 'Active: 883604 kB' 'Inactive: 1372336 kB' 'Active(anon): 124400 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 115772 kB' 'Mapped: 48036 kB' 'Shmem: 10464 kB' 'KReclaimable: 70304 kB' 'Slab: 147120 kB' 'SReclaimable: 70304 kB' 'SUnreclaim: 76816 kB' 'KernelStack: 6240 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 335056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54888 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[... trace elided: setup/common.sh@31-32 reads the snapshot above line by line and '# continue's past every key until HugePages_Rsvd matches ...]
00:25:05.141 08:21:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:25:05.141 08:21:38 -- setup/common.sh@33 -- # echo 0 00:25:05.141 08:21:38 -- setup/common.sh@33 -- # return 0 00:25:05.141 08:21:38 -- setup/hugepages.sh@100 -- # resv=0 nr_hugepages=1024 00:25:05.141 08:21:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 resv_hugepages=0 00:25:05.141 08:21:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 surplus_hugepages=0 00:25:05.141 08:21:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 anon_hugepages=0 00:25:05.141 08:21:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:25:05.141 08:21:38 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:25:05.141 08:21:38 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
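The two arithmetic checks just traced are the consistency core of verify_nr_hugepages: with surp=0 and resv=0 collected above, the kernel's HugePages_Total must be fully explained by the requested persistent pool. A standalone restatement with this run's values hard-coded from the log (an interpretive sketch, not SPDK source):

    # setup/hugepages.sh@107/@109 as seen in the trace, values from this run.
    nr_hugepages=1024 surp=0 resv=0
    total=1024   # HugePages_Total from the snapshots above
    (( total == nr_hugepages + surp + resv )) || echo 'unexpected surplus/reserved pages'
    (( total == nr_hugepages ))               || echo 'pool size differs from request'

Both expressions evaluate true here, so the function proceeds to re-read HugePages_Total below.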
00:25:05.141 08:21:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:25:05.141 08:21:38 -- setup/common.sh@17 -- # local get=HugePages_Total 00:25:05.141 08:21:38 -- setup/common.sh@18 -- # local node= 00:25:05.141 08:21:38 -- setup/common.sh@19 -- # local var val 00:25:05.141 08:21:38 -- setup/common.sh@20 -- # local mem_f mem 00:25:05.141 08:21:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:25:05.141 08:21:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:25:05.141 08:21:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:25:05.141 08:21:38 -- setup/common.sh@28 -- # mapfile -t mem 00:25:05.141 08:21:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:25:05.141 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.141 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.141 08:21:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7575492 kB' 'MemAvailable: 9505020 kB' 'Buffers: 2436 kB' 'Cached: 2139568 kB' 'SwapCached: 0 kB' 'Active: 883524 kB' 'Inactive: 1372336 kB' 'Active(anon): 124320 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 115484 kB' 'Mapped: 48036 kB' 'Shmem: 10464 kB' 'KReclaimable: 70304 kB' 'Slab: 147116 kB' 'SReclaimable: 70304 kB' 'SUnreclaim: 76812 kB' 'KernelStack: 6256 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 335056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54888 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[... trace elided: setup/common.sh@31-32 reads the snapshot above line by line, '# continue'ing toward HugePages_Total; the captured log breaks off mid-scan on the fragment below ...]
00:25:05.142 08:21:38 -- setup/common.sh@32 -- # [[ FilePmdMapped ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:05.142 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.142 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.142 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.142 08:21:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:05.142 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.142 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.142 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.142 08:21:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:05.142 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.142 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.142 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.142 08:21:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:05.142 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.142 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.142 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.142 08:21:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:25:05.142 08:21:38 -- setup/common.sh@33 -- # echo 1024 00:25:05.143 08:21:38 -- setup/common.sh@33 -- # return 0 00:25:05.143 08:21:38 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:25:05.143 08:21:38 -- setup/hugepages.sh@112 -- # get_nodes 00:25:05.143 08:21:38 -- setup/hugepages.sh@27 -- # local node 00:25:05.143 08:21:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:25:05.143 08:21:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:25:05.143 08:21:38 -- setup/hugepages.sh@32 -- # no_nodes=1 00:25:05.143 08:21:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:25:05.143 08:21:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:25:05.143 08:21:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:25:05.143 08:21:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:25:05.143 08:21:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:25:05.143 08:21:38 -- setup/common.sh@18 -- # local node=0 00:25:05.143 08:21:38 -- setup/common.sh@19 -- # local var val 00:25:05.143 08:21:38 -- setup/common.sh@20 -- # local mem_f mem 00:25:05.143 08:21:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:25:05.143 08:21:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:25:05.143 08:21:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:25:05.143 08:21:38 -- setup/common.sh@28 -- # mapfile -t mem 00:25:05.143 08:21:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7575492 kB' 'MemUsed: 4666468 kB' 'SwapCached: 0 kB' 'Active: 883492 kB' 'Inactive: 1372336 kB' 'Active(anon): 124288 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1372336 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 2142004 kB' 'Mapped: 48036 kB' 'AnonPages: 115712 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70304 kB' 'Slab: 147116 kB' 'SReclaimable: 70304 kB' 'SUnreclaim: 76812 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 
00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- 
setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.143 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.143 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.144 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.144 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.144 08:21:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.144 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.144 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.144 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.144 08:21:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.144 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.144 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.144 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.144 08:21:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.144 08:21:38 -- setup/common.sh@32 -- # continue 00:25:05.144 08:21:38 -- setup/common.sh@31 -- # IFS=': ' 00:25:05.144 08:21:38 -- setup/common.sh@31 -- # read -r var val _ 00:25:05.144 08:21:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:25:05.144 08:21:38 -- setup/common.sh@33 -- # echo 0 00:25:05.144 08:21:38 -- setup/common.sh@33 -- # return 0 00:25:05.144 08:21:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:25:05.144 08:21:38 -- 
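The condensed trace above is setup/common.sh's get_meminfo: it walks /proc/meminfo (or, for per-node queries, /sys/devices/system/node/nodeN/meminfo) line by line, skipping every field until the requested counter matches, then echoes its value. A minimal standalone sketch of that parsing pattern, simplified from what the trace shows (a sed call stands in for the script's extglob stripping of the "Node N " prefix):

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above: split each
# "Field: value [kB]" line on ': ' and print the first match.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _
    # Per-node lookups read that node's own meminfo instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long runs of "continue" above
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_meminfo HugePages_Total    # prints 1024 on this runner
get_meminfo HugePages_Surp 0   # surplus huge pages on node0: 0

Setting IFS=': ' makes read treat the colon and the padding spaces as one delimiter run, so val lands directly on the numeric field.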
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:25:05.144 08:21:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:25:05.144 08:21:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:25:05.144 node0=1024 expecting 1024 00:25:05.144 08:21:38 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:25:05.144 08:21:38 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:25:05.144 00:25:05.144 real 0m1.341s 00:25:05.144 user 0m0.578s 00:25:05.144 sys 0m0.843s 00:25:05.144 08:21:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:05.144 08:21:38 -- common/autotest_common.sh@10 -- # set +x 00:25:05.144 ************************************ 00:25:05.144 END TEST no_shrink_alloc 00:25:05.144 ************************************ 00:25:05.402 08:21:38 -- setup/hugepages.sh@217 -- # clear_hp 00:25:05.402 08:21:38 -- setup/hugepages.sh@37 -- # local node hp 00:25:05.402 08:21:38 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:25:05.402 08:21:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:25:05.402 08:21:38 -- setup/hugepages.sh@41 -- # echo 0 00:25:05.402 08:21:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:25:05.402 08:21:38 -- setup/hugepages.sh@41 -- # echo 0 00:25:05.402 08:21:38 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:25:05.402 08:21:38 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:25:05.402 00:25:05.402 real 0m5.613s 00:25:05.402 user 0m2.465s 00:25:05.402 sys 0m3.373s 00:25:05.402 08:21:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:05.402 08:21:38 -- common/autotest_common.sh@10 -- # set +x 00:25:05.402 ************************************ 00:25:05.402 END TEST hugepages 00:25:05.402 ************************************ 00:25:05.402 08:21:38 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:25:05.402 08:21:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:05.402 08:21:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:05.402 08:21:38 -- common/autotest_common.sh@10 -- # set +x 00:25:05.402 ************************************ 00:25:05.402 START TEST driver 00:25:05.402 ************************************ 00:25:05.402 08:21:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:25:05.402 * Looking for test storage... 
00:25:05.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:25:05.402 08:21:38 -- setup/driver.sh@68 -- # setup reset 00:25:05.402 08:21:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:25:05.402 08:21:38 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:06.359 08:21:39 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:25:06.359 08:21:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:06.359 08:21:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:06.359 08:21:39 -- common/autotest_common.sh@10 -- # set +x 00:25:06.359 ************************************ 00:25:06.359 START TEST guess_driver 00:25:06.359 ************************************ 00:25:06.359 08:21:39 -- common/autotest_common.sh@1104 -- # guess_driver 00:25:06.359 08:21:39 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:25:06.359 08:21:39 -- setup/driver.sh@47 -- # local fail=0 00:25:06.359 08:21:39 -- setup/driver.sh@49 -- # pick_driver 00:25:06.359 08:21:39 -- setup/driver.sh@36 -- # vfio 00:25:06.359 08:21:39 -- setup/driver.sh@21 -- # local iommu_grups 00:25:06.359 08:21:39 -- setup/driver.sh@22 -- # local unsafe_vfio 00:25:06.359 08:21:39 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:25:06.359 08:21:39 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:25:06.359 08:21:39 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:25:06.359 08:21:39 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:25:06.359 08:21:39 -- setup/driver.sh@32 -- # return 1 00:25:06.359 08:21:39 -- setup/driver.sh@38 -- # uio 00:25:06.359 08:21:39 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:25:06.359 08:21:39 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:25:06.359 08:21:39 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:25:06.359 08:21:39 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:25:06.359 08:21:39 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:25:06.359 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:25:06.359 08:21:39 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:25:06.359 08:21:39 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:25:06.359 08:21:39 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:25:06.359 08:21:39 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:25:06.359 Looking for driver=uio_pci_generic 00:25:06.359 08:21:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:25:06.359 08:21:39 -- setup/driver.sh@45 -- # setup output config 00:25:06.359 08:21:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:25:06.359 08:21:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:25:06.927 08:21:40 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:25:06.927 08:21:40 -- setup/driver.sh@58 -- # continue 00:25:06.927 08:21:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:25:07.185 08:21:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:25:07.185 08:21:40 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:25:07.185 08:21:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:25:07.185 08:21:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:25:07.185 08:21:40 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:25:07.185 08:21:40 -- 
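pick_driver, traced above, tries vfio first and accepts it only when IOMMU groups exist or unsafe no-IOMMU mode is enabled; here both checks fail ((( 0 > 0 )) and [[ '' == Y ]]), so it falls back to uio_pci_generic once modprobe --show-depends resolves the module to real .ko files. A rough sketch of that decision, reconstructed from the trace rather than copied from setup/driver.sh (uio_pci_generic is the name the log reports; vfio-pci on the other branch is an assumption):

#!/usr/bin/env bash
# Rough reconstruction of the driver-guessing logic traced above.
pick_driver() {
    local groups unsafe=''
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    groups=(/sys/kernel/iommu_groups/*)
    if [[ -e ${groups[0]} || $unsafe == Y ]]; then
        # IOMMU groups (or unsafe no-IOMMU mode) make vfio usable.
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic | grep -q '\.ko'; then
        # Otherwise fall back to uio_pci_generic if the module resolves.
        echo uio_pci_generic
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}

pick_driver   # on this VM: uio_pci_generic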
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:25:07.185 08:21:40 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:25:07.185 08:21:40 -- setup/driver.sh@65 -- # setup reset 00:25:07.185 08:21:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:25:07.185 08:21:40 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:08.120 00:25:08.120 real 0m1.745s 00:25:08.120 user 0m0.605s 00:25:08.120 sys 0m1.211s 00:25:08.120 08:21:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:08.120 08:21:41 -- common/autotest_common.sh@10 -- # set +x 00:25:08.120 ************************************ 00:25:08.120 END TEST guess_driver 00:25:08.120 ************************************ 00:25:08.120 00:25:08.120 real 0m2.581s 00:25:08.120 user 0m0.880s 00:25:08.120 sys 0m1.866s 00:25:08.120 08:21:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:08.120 08:21:41 -- common/autotest_common.sh@10 -- # set +x 00:25:08.120 ************************************ 00:25:08.120 END TEST driver 00:25:08.120 ************************************ 00:25:08.120 08:21:41 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:25:08.120 08:21:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:08.120 08:21:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:08.120 08:21:41 -- common/autotest_common.sh@10 -- # set +x 00:25:08.120 ************************************ 00:25:08.120 START TEST devices 00:25:08.120 ************************************ 00:25:08.120 08:21:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:25:08.120 * Looking for test storage... 00:25:08.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:25:08.120 08:21:41 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:25:08.120 08:21:41 -- setup/devices.sh@192 -- # setup reset 00:25:08.120 08:21:41 -- setup/common.sh@9 -- # [[ reset == output ]] 00:25:08.120 08:21:41 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:09.058 08:21:42 -- setup/devices.sh@194 -- # get_zoned_devs 00:25:09.058 08:21:42 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:25:09.058 08:21:42 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:25:09.058 08:21:42 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:25:09.058 08:21:42 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:25:09.058 08:21:42 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:25:09.058 08:21:42 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:25:09.058 08:21:42 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:09.058 08:21:42 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:25:09.058 08:21:42 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:25:09.058 08:21:42 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:25:09.058 08:21:42 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:25:09.058 08:21:42 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:09.058 08:21:42 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:25:09.058 08:21:42 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:25:09.058 08:21:42 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:25:09.058 08:21:42 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:25:09.058 08:21:42 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:25:09.058 08:21:42 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:25:09.058 08:21:42 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:25:09.058 08:21:42 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:25:09.058 08:21:42 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:25:09.058 08:21:42 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:25:09.058 08:21:42 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:25:09.058 08:21:42 -- setup/devices.sh@196 -- # blocks=() 00:25:09.058 08:21:42 -- setup/devices.sh@196 -- # declare -a blocks 00:25:09.058 08:21:42 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:25:09.058 08:21:42 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:25:09.058 08:21:42 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:25:09.058 08:21:42 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:25:09.058 08:21:42 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:25:09.058 08:21:42 -- setup/devices.sh@201 -- # ctrl=nvme0 00:25:09.058 08:21:42 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:25:09.058 08:21:42 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:25:09.058 08:21:42 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:25:09.058 08:21:42 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:25:09.058 08:21:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:09.058 No valid GPT data, bailing 00:25:09.058 08:21:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:09.058 08:21:42 -- scripts/common.sh@393 -- # pt= 00:25:09.058 08:21:42 -- scripts/common.sh@394 -- # return 1 00:25:09.058 08:21:42 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:25:09.058 08:21:42 -- setup/common.sh@76 -- # local dev=nvme0n1 00:25:09.058 08:21:42 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:09.058 08:21:42 -- setup/common.sh@80 -- # echo 5368709120 00:25:09.058 08:21:42 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:25:09.058 08:21:42 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:25:09.058 08:21:42 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:25:09.058 08:21:42 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:25:09.058 08:21:42 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:25:09.058 08:21:42 -- setup/devices.sh@201 -- # ctrl=nvme1 00:25:09.058 08:21:42 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:25:09.058 08:21:42 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:25:09.058 08:21:42 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:25:09.058 08:21:42 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:25:09.058 08:21:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:09.058 No valid GPT data, bailing 00:25:09.058 08:21:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:09.316 08:21:42 -- scripts/common.sh@393 -- # pt= 00:25:09.316 08:21:42 -- scripts/common.sh@394 -- # return 1 00:25:09.316 08:21:42 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:25:09.316 08:21:42 -- setup/common.sh@76 -- # local dev=nvme1n1 00:25:09.316 08:21:42 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:09.316 08:21:42 -- setup/common.sh@80 -- # echo 4294967296 00:25:09.316 08:21:42 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:25:09.316 08:21:42 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:25:09.316 08:21:42 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:25:09.316 08:21:42 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:25:09.316 08:21:42 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:25:09.316 08:21:42 -- setup/devices.sh@201 -- # ctrl=nvme1 00:25:09.316 08:21:42 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:25:09.316 08:21:42 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:25:09.316 08:21:42 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:25:09.316 08:21:42 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:25:09.316 08:21:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:25:09.316 No valid GPT data, bailing 00:25:09.316 08:21:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:25:09.316 08:21:42 -- scripts/common.sh@393 -- # pt= 00:25:09.316 08:21:42 -- scripts/common.sh@394 -- # return 1 00:25:09.316 08:21:42 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:25:09.317 08:21:42 -- setup/common.sh@76 -- # local dev=nvme1n2 00:25:09.317 08:21:42 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:25:09.317 08:21:42 -- setup/common.sh@80 -- # echo 4294967296 00:25:09.317 08:21:42 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:25:09.317 08:21:42 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:25:09.317 08:21:42 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:25:09.317 08:21:42 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:25:09.317 08:21:42 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:25:09.317 08:21:42 -- setup/devices.sh@201 -- # ctrl=nvme1 00:25:09.317 08:21:42 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:25:09.317 08:21:42 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:25:09.317 08:21:42 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:25:09.317 08:21:42 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:25:09.317 08:21:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:25:09.317 No valid GPT data, bailing 00:25:09.317 08:21:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:25:09.317 08:21:42 -- scripts/common.sh@393 -- # pt= 00:25:09.317 08:21:42 -- scripts/common.sh@394 -- # return 1 00:25:09.317 08:21:42 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:25:09.317 08:21:42 -- setup/common.sh@76 -- # local dev=nvme1n3 00:25:09.317 08:21:42 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:25:09.317 08:21:42 -- setup/common.sh@80 -- # echo 4294967296 00:25:09.317 08:21:42 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:25:09.317 08:21:42 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:25:09.317 08:21:42 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:25:09.317 08:21:42 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:25:09.317 08:21:42 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:25:09.317 08:21:42 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:25:09.317 08:21:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:09.317 08:21:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:09.317 08:21:42 -- common/autotest_common.sh@10 -- # set +x 00:25:09.317 
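Before nvme_mount starts, each NVMe block device above has to pass three gates: it must not be zoned (queue/zoned reads "none"), it must carry no recognizable partition table (spdk-gpt.py bails with "No valid GPT data" and blkid reports an empty PTTYPE), and it must be at least min_disk_size (3221225472 bytes). A condensed sketch of that eligibility scan, with blkid standing in for SPDK's spdk-gpt.py checker and a plain glob replacing the test's extglob that skips nvme*c* nodes:

#!/usr/bin/env bash
# Sketch of the device-eligibility scan traced above.
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in the trace
usable=()
for block in /sys/block/nvme*; do
    dev=${block##*/}
    # Gate 1: skip zoned namespaces (queue/zoned must read "none").
    [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
    # Gate 2: a non-empty PTTYPE means an existing partition table.
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] && continue
    # Gate 3: /sys/block/*/size counts 512-byte sectors.
    (( $(<"$block/size") * 512 >= min_disk_size )) && usable+=("$dev")
done
echo "usable disks: ${usable[*]}"   # here: nvme0n1 nvme1n1 nvme1n2 nvme1n3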
************************************ 00:25:09.317 START TEST nvme_mount 00:25:09.317 ************************************ 00:25:09.317 08:21:42 -- common/autotest_common.sh@1104 -- # nvme_mount 00:25:09.317 08:21:42 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:25:09.317 08:21:42 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:25:09.317 08:21:42 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:25:09.317 08:21:42 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:25:09.317 08:21:42 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:25:09.317 08:21:42 -- setup/common.sh@39 -- # local disk=nvme0n1 00:25:09.317 08:21:42 -- setup/common.sh@40 -- # local part_no=1 00:25:09.317 08:21:42 -- setup/common.sh@41 -- # local size=1073741824 00:25:09.317 08:21:42 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:25:09.317 08:21:42 -- setup/common.sh@44 -- # parts=() 00:25:09.317 08:21:42 -- setup/common.sh@44 -- # local parts 00:25:09.317 08:21:42 -- setup/common.sh@46 -- # (( part = 1 )) 00:25:09.317 08:21:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:25:09.317 08:21:42 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:25:09.317 08:21:42 -- setup/common.sh@46 -- # (( part++ )) 00:25:09.317 08:21:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:25:09.317 08:21:42 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:25:09.317 08:21:42 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:25:09.317 08:21:42 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:25:10.254 Creating new GPT entries in memory. 00:25:10.254 GPT data structures destroyed! You may now partition the disk using fdisk or 00:25:10.254 other utilities. 00:25:10.254 08:21:43 -- setup/common.sh@57 -- # (( part = 1 )) 00:25:10.532 08:21:43 -- setup/common.sh@57 -- # (( part <= part_no )) 00:25:10.532 08:21:43 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:25:10.532 08:21:43 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:25:10.532 08:21:43 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:25:11.479 Creating new GPT entries in memory. 00:25:11.479 The operation has completed successfully. 
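partition_drive, just traced, zaps the disk label and then carves equal slices with sgdisk: size starts at 1073741824 and is divided by 4096, so each partition spans 262144 sectors, which reproduces the --new=1:2048:264191 range above (and 2:264192:526335 later in the dm test). A minimal sketch of that loop, with udevadm settle standing in for the test's scripts/sync_dev_uevents.sh helper:

#!/usr/bin/env bash
# Sketch of partition_drive as traced above: wipe the label, then
# carve fixed-size partitions, one sgdisk call per partition.
disk=/dev/nvme0n1
size=$((1073741824 / 4096))   # 262144 sectors per partition
sgdisk "$disk" --zap-all
part_start=0 part_end=0
for part in 1 2; do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    # flock serializes sgdisk runs that touch the same disk.
    flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
done
udevadm settle   # stand-in for scripts/sync_dev_uevents.sh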
00:25:11.479 08:21:44 -- setup/common.sh@57 -- # (( part++ )) 00:25:11.479 08:21:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:25:11.479 08:21:44 -- setup/common.sh@62 -- # wait 54013 00:25:11.479 08:21:44 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:25:11.479 08:21:44 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:25:11.479 08:21:44 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:25:11.479 08:21:44 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:25:11.479 08:21:44 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:25:11.479 08:21:44 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:25:11.479 08:21:44 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:25:11.479 08:21:44 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:25:11.479 08:21:44 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:25:11.479 08:21:44 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:25:11.479 08:21:44 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:25:11.479 08:21:44 -- setup/devices.sh@53 -- # local found=0 00:25:11.479 08:21:44 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:25:11.479 08:21:44 -- setup/devices.sh@56 -- # : 00:25:11.479 08:21:44 -- setup/devices.sh@59 -- # local pci status 00:25:11.479 08:21:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:11.479 08:21:44 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:25:11.479 08:21:44 -- setup/devices.sh@47 -- # setup output config 00:25:11.479 08:21:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:25:11.479 08:21:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:25:11.738 08:21:44 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:25:11.738 08:21:44 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:25:11.738 08:21:44 -- setup/devices.sh@63 -- # found=1 00:25:11.738 08:21:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:11.738 08:21:44 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:25:11.738 08:21:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:11.997 08:21:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:25:11.997 08:21:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:12.257 08:21:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:25:12.257 08:21:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:12.257 08:21:45 -- setup/devices.sh@66 -- # (( found == 1 )) 00:25:12.257 08:21:45 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:25:12.257 08:21:45 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:25:12.257 08:21:45 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:25:12.257 08:21:45 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:25:12.257 08:21:45 -- setup/devices.sh@110 -- # cleanup_nvme 00:25:12.257 08:21:45 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:25:12.257 08:21:45 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:25:12.257 08:21:45 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:25:12.257 08:21:45 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:25:12.257 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:25:12.257 08:21:45 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:25:12.257 08:21:45 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:25:12.517 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:25:12.517 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:25:12.517 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:25:12.517 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:25:12.517 08:21:45 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:25:12.517 08:21:45 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:25:12.517 08:21:45 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:25:12.517 08:21:45 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:25:12.517 08:21:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:25:12.517 08:21:45 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:25:12.517 08:21:45 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:25:12.517 08:21:45 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:25:12.517 08:21:45 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:25:12.517 08:21:45 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:25:12.817 08:21:45 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:25:12.817 08:21:45 -- setup/devices.sh@53 -- # local found=0 00:25:12.817 08:21:45 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:25:12.817 08:21:45 -- setup/devices.sh@56 -- # : 00:25:12.817 08:21:45 -- setup/devices.sh@59 -- # local pci status 00:25:12.817 08:21:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:12.817 08:21:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:25:12.817 08:21:45 -- setup/devices.sh@47 -- # setup output config 00:25:12.817 08:21:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:25:12.817 08:21:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:25:12.817 08:21:46 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:25:12.817 08:21:46 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:25:12.817 08:21:46 -- setup/devices.sh@63 -- # found=1 00:25:12.817 08:21:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:12.817 08:21:46 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:25:12.817 
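The mkfs/cleanup_nvme pair in the trace is the round-trip every nvme_mount scenario repeats: format the target, mount it under the test directory, and afterwards unmount and wipe all signatures so the next scenario starts from a blank disk. A compact sketch under those assumptions (the mount point below is illustrative; the test itself mounts under test/setup/nvme_mount):

#!/usr/bin/env bash
# Sketch of the mkfs + cleanup_nvme round-trip traced above.
mnt=/tmp/nvme_mount   # illustrative path
mkfs_mount() {
    local dev=$1
    mkdir -p "$mnt"
    mkfs.ext4 -qF "$dev"   # -q quiet, -F force formatting the device
    mount "$dev" "$mnt"
}
cleanup_nvme() {
    mountpoint -q "$mnt" && umount "$mnt"
    # Erase filesystem and partition-table signatures, as in the trace.
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1
}

mkfs_mount /dev/nvme0n1p1
cleanup_nvme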
08:21:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:13.076 08:21:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:25:13.076 08:21:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:13.334 08:21:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:25:13.334 08:21:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:13.334 08:21:46 -- setup/devices.sh@66 -- # (( found == 1 )) 00:25:13.334 08:21:46 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:25:13.334 08:21:46 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:25:13.334 08:21:46 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:25:13.334 08:21:46 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:25:13.334 08:21:46 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:25:13.334 08:21:46 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:25:13.334 08:21:46 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:25:13.334 08:21:46 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:25:13.334 08:21:46 -- setup/devices.sh@50 -- # local mount_point= 00:25:13.334 08:21:46 -- setup/devices.sh@51 -- # local test_file= 00:25:13.334 08:21:46 -- setup/devices.sh@53 -- # local found=0 00:25:13.334 08:21:46 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:25:13.334 08:21:46 -- setup/devices.sh@59 -- # local pci status 00:25:13.334 08:21:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:13.334 08:21:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:25:13.334 08:21:46 -- setup/devices.sh@47 -- # setup output config 00:25:13.334 08:21:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:25:13.334 08:21:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:25:13.592 08:21:46 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:25:13.592 08:21:46 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:25:13.592 08:21:46 -- setup/devices.sh@63 -- # found=1 00:25:13.592 08:21:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:13.592 08:21:46 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:25:13.592 08:21:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:14.159 08:21:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:25:14.159 08:21:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:14.159 08:21:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:25:14.159 08:21:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:14.159 08:21:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:25:14.159 08:21:47 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:25:14.159 08:21:47 -- setup/devices.sh@68 -- # return 0 00:25:14.159 08:21:47 -- setup/devices.sh@128 -- # cleanup_nvme 00:25:14.159 08:21:47 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:25:14.159 08:21:47 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:25:14.159 08:21:47 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:25:14.159 08:21:47 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:25:14.159 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:25:14.159 00:25:14.159 real 0m4.929s 00:25:14.159 user 0m1.030s 00:25:14.159 sys 0m1.600s 00:25:14.159 08:21:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:14.159 08:21:47 -- common/autotest_common.sh@10 -- # set +x 00:25:14.159 ************************************ 00:25:14.159 END TEST nvme_mount 00:25:14.159 ************************************ 00:25:14.418 08:21:47 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:25:14.418 08:21:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:14.418 08:21:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:14.418 08:21:47 -- common/autotest_common.sh@10 -- # set +x 00:25:14.418 ************************************ 00:25:14.418 START TEST dm_mount 00:25:14.418 ************************************ 00:25:14.418 08:21:47 -- common/autotest_common.sh@1104 -- # dm_mount 00:25:14.418 08:21:47 -- setup/devices.sh@144 -- # pv=nvme0n1 00:25:14.418 08:21:47 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:25:14.418 08:21:47 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:25:14.418 08:21:47 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:25:14.418 08:21:47 -- setup/common.sh@39 -- # local disk=nvme0n1 00:25:14.418 08:21:47 -- setup/common.sh@40 -- # local part_no=2 00:25:14.418 08:21:47 -- setup/common.sh@41 -- # local size=1073741824 00:25:14.418 08:21:47 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:25:14.418 08:21:47 -- setup/common.sh@44 -- # parts=() 00:25:14.418 08:21:47 -- setup/common.sh@44 -- # local parts 00:25:14.418 08:21:47 -- setup/common.sh@46 -- # (( part = 1 )) 00:25:14.418 08:21:47 -- setup/common.sh@46 -- # (( part <= part_no )) 00:25:14.418 08:21:47 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:25:14.418 08:21:47 -- setup/common.sh@46 -- # (( part++ )) 00:25:14.418 08:21:47 -- setup/common.sh@46 -- # (( part <= part_no )) 00:25:14.418 08:21:47 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:25:14.418 08:21:47 -- setup/common.sh@46 -- # (( part++ )) 00:25:14.418 08:21:47 -- setup/common.sh@46 -- # (( part <= part_no )) 00:25:14.418 08:21:47 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:25:14.418 08:21:47 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:25:14.418 08:21:47 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:25:15.353 Creating new GPT entries in memory. 00:25:15.353 GPT data structures destroyed! You may now partition the disk using fdisk or 00:25:15.353 other utilities. 00:25:15.353 08:21:48 -- setup/common.sh@57 -- # (( part = 1 )) 00:25:15.353 08:21:48 -- setup/common.sh@57 -- # (( part <= part_no )) 00:25:15.353 08:21:48 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:25:15.353 08:21:48 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:25:15.353 08:21:48 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:25:16.726 Creating new GPT entries in memory. 00:25:16.726 The operation has completed successfully. 00:25:16.726 08:21:49 -- setup/common.sh@57 -- # (( part++ )) 00:25:16.726 08:21:49 -- setup/common.sh@57 -- # (( part <= part_no )) 00:25:16.726 08:21:49 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:25:16.726 08:21:49 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:25:16.726 08:21:49 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:25:17.662 The operation has completed successfully. 00:25:17.662 08:21:50 -- setup/common.sh@57 -- # (( part++ )) 00:25:17.662 08:21:50 -- setup/common.sh@57 -- # (( part <= part_no )) 00:25:17.662 08:21:50 -- setup/common.sh@62 -- # wait 54505 00:25:17.663 08:21:50 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:25:17.663 08:21:50 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:25:17.663 08:21:50 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:25:17.663 08:21:50 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:25:17.663 08:21:50 -- setup/devices.sh@160 -- # for t in {1..5} 00:25:17.663 08:21:50 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:25:17.663 08:21:50 -- setup/devices.sh@161 -- # break 00:25:17.663 08:21:50 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:25:17.663 08:21:50 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:25:17.663 08:21:50 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:25:17.663 08:21:50 -- setup/devices.sh@166 -- # dm=dm-0 00:25:17.663 08:21:50 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:25:17.663 08:21:50 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:25:17.663 08:21:50 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:25:17.663 08:21:50 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:25:17.663 08:21:50 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:25:17.663 08:21:50 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:25:17.663 08:21:50 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:25:17.663 08:21:50 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:25:17.663 08:21:50 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:25:17.663 08:21:50 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:25:17.663 08:21:50 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:25:17.663 08:21:50 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:25:17.663 08:21:50 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:25:17.663 08:21:50 -- setup/devices.sh@53 -- # local found=0 00:25:17.663 08:21:50 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:25:17.663 08:21:50 -- setup/devices.sh@56 -- # : 00:25:17.663 08:21:50 -- setup/devices.sh@59 -- # local pci status 00:25:17.663 08:21:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:17.663 08:21:50 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:25:17.663 08:21:50 -- setup/devices.sh@47 -- # setup output config 00:25:17.663 08:21:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:25:17.663 08:21:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:25:17.921 08:21:51 -- 
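dm_mount then stacks a device-mapper target named nvme_dm_test over the two fresh partitions; once it exists, readlink resolves /dev/mapper/nvme_dm_test to its dm-0 node and both partitions list that node as a holder. The trace never shows the dm table itself, so the linear concatenation below is an assumption; only the 262144-sector lengths are taken from the sgdisk ranges above:

#!/usr/bin/env bash
# Assumed-linear reconstruction of the nvme_dm_test target traced above.
# Table rows: <logical_start> <num_sectors> linear <device> <offset>
dmsetup create nvme_dm_test <<'EOF'
0 262144 linear /dev/nvme0n1p1 0
262144 262144 linear /dev/nvme0n1p2 0
EOF
dm=$(readlink -f /dev/mapper/nvme_dm_test)   # resolves to /dev/dm-0 here
dm=${dm##*/}
# Both backing partitions now expose the dm node as a holder:
ls /sys/class/block/nvme0n1p1/holders/"$dm" \
   /sys/class/block/nvme0n1p2/holders/"$dm"
# Teardown, as cleanup_dm does at the end of the test:
dmsetup remove --force nvme_dm_test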
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:25:17.921 08:21:51 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:25:17.921 08:21:51 -- setup/devices.sh@63 -- # found=1 00:25:17.921 08:21:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:17.921 08:21:51 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:25:17.921 08:21:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:18.180 08:21:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:25:18.180 08:21:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:18.438 08:21:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:25:18.438 08:21:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:18.438 08:21:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:25:18.438 08:21:51 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:25:18.438 08:21:51 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:25:18.438 08:21:51 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:25:18.438 08:21:51 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:25:18.438 08:21:51 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:25:18.697 08:21:51 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:25:18.697 08:21:51 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:25:18.697 08:21:51 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:25:18.697 08:21:51 -- setup/devices.sh@50 -- # local mount_point= 00:25:18.697 08:21:51 -- setup/devices.sh@51 -- # local test_file= 00:25:18.697 08:21:51 -- setup/devices.sh@53 -- # local found=0 00:25:18.697 08:21:51 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:25:18.697 08:21:51 -- setup/devices.sh@59 -- # local pci status 00:25:18.697 08:21:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:18.697 08:21:51 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:25:18.697 08:21:51 -- setup/devices.sh@47 -- # setup output config 00:25:18.697 08:21:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:25:18.697 08:21:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:25:18.957 08:21:52 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:25:18.957 08:21:52 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:25:18.957 08:21:52 -- setup/devices.sh@63 -- # found=1 00:25:18.957 08:21:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:18.957 08:21:52 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:25:18.957 08:21:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:19.216 08:21:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:25:19.216 08:21:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:19.216 08:21:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:25:19.216 08:21:52 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:25:19.474 08:21:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:25:19.474 08:21:52 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:25:19.474 08:21:52 -- setup/devices.sh@68 -- # return 0 00:25:19.474 08:21:52 -- setup/devices.sh@187 -- # cleanup_dm 00:25:19.474 08:21:52 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:25:19.474 08:21:52 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:25:19.474 08:21:52 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:25:19.474 08:21:52 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:25:19.475 08:21:52 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:25:19.475 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:25:19.475 08:21:52 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:25:19.475 08:21:52 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:25:19.475 00:25:19.475 real 0m5.156s 00:25:19.475 user 0m0.742s 00:25:19.475 sys 0m1.201s 00:25:19.475 08:21:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:19.475 08:21:52 -- common/autotest_common.sh@10 -- # set +x 00:25:19.475 ************************************ 00:25:19.475 END TEST dm_mount 00:25:19.475 ************************************ 00:25:19.475 08:21:52 -- setup/devices.sh@1 -- # cleanup 00:25:19.475 08:21:52 -- setup/devices.sh@11 -- # cleanup_nvme 00:25:19.475 08:21:52 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:25:19.475 08:21:52 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:25:19.475 08:21:52 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:25:19.475 08:21:52 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:25:19.475 08:21:52 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:25:19.742 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:25:19.742 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:25:19.742 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:25:19.742 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:25:19.742 08:21:53 -- setup/devices.sh@12 -- # cleanup_dm 00:25:19.742 08:21:53 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:25:19.742 08:21:53 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:25:19.742 08:21:53 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:25:19.742 08:21:53 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:25:19.742 08:21:53 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:25:19.742 08:21:53 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:25:19.742 ************************************ 00:25:19.742 END TEST devices 00:25:19.742 ************************************ 00:25:19.742 00:25:19.742 real 0m11.861s 00:25:19.742 user 0m2.456s 00:25:19.742 sys 0m3.635s 00:25:19.742 08:21:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:19.742 08:21:53 -- common/autotest_common.sh@10 -- # set +x 00:25:20.001 ************************************ 00:25:20.001 END TEST setup.sh 00:25:20.001 ************************************ 00:25:20.001 00:25:20.001 real 0m25.861s 00:25:20.001 user 0m7.956s 00:25:20.001 sys 0m12.583s 00:25:20.001 08:21:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:20.001 08:21:53 -- common/autotest_common.sh@10 -- # set +x 00:25:20.001 08:21:53 -- 
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:25:20.001 Hugepages 00:25:20.001 node hugesize free / total 00:25:20.260 node0 1048576kB 0 / 0 00:25:20.260 node0 2048kB 2048 / 2048 00:25:20.260 00:25:20.260 Type BDF Vendor Device NUMA Driver Device Block devices 00:25:20.260 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:25:20.260 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:25:20.518 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:25:20.518 08:21:53 -- spdk/autotest.sh@141 -- # uname -s 00:25:20.518 08:21:53 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:25:20.518 08:21:53 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:25:20.518 08:21:53 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:21.085 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:21.343 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:25:21.343 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:25:21.343 08:21:54 -- common/autotest_common.sh@1517 -- # sleep 1 00:25:22.721 08:21:55 -- common/autotest_common.sh@1518 -- # bdfs=() 00:25:22.721 08:21:55 -- common/autotest_common.sh@1518 -- # local bdfs 00:25:22.721 08:21:55 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:25:22.721 08:21:55 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:25:22.721 08:21:55 -- common/autotest_common.sh@1498 -- # bdfs=() 00:25:22.721 08:21:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:25:22.721 08:21:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:22.721 08:21:55 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:22.721 08:21:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:25:22.721 08:21:55 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:25:22.721 08:21:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:22.721 08:21:55 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:22.980 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:22.980 Waiting for block devices as requested 00:25:22.980 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:25:22.980 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:25:23.238 08:21:56 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:25:23.238 08:21:56 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:25:23.238 08:21:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:25:23.238 08:21:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:25:23.238 08:21:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:25:23.238 08:21:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:25:23.238 08:21:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:25:23.238 08:21:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:25:23.238 08:21:56 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:25:23.238 08:21:56 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:25:23.238 08:21:56 -- 
common/autotest_common.sh@1530 -- # cut -d: -f2 00:25:23.238 08:21:56 -- common/autotest_common.sh@1530 -- # grep oacs 00:25:23.238 08:21:56 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:25:23.238 08:21:56 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:25:23.238 08:21:56 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:25:23.238 08:21:56 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:25:23.238 08:21:56 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:25:23.238 08:21:56 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:25:23.238 08:21:56 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:25:23.238 08:21:56 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:25:23.238 08:21:56 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:25:23.238 08:21:56 -- common/autotest_common.sh@1542 -- # continue 00:25:23.238 08:21:56 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:25:23.238 08:21:56 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:25:23.238 08:21:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:25:23.238 08:21:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:25:23.238 08:21:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:25:23.238 08:21:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:25:23.238 08:21:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:25:23.238 08:21:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:25:23.238 08:21:56 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:25:23.238 08:21:56 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:25:23.238 08:21:56 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:25:23.238 08:21:56 -- common/autotest_common.sh@1530 -- # grep oacs 00:25:23.238 08:21:56 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:25:23.238 08:21:56 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:25:23.238 08:21:56 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:25:23.238 08:21:56 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:25:23.238 08:21:56 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:25:23.238 08:21:56 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:25:23.238 08:21:56 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:25:23.238 08:21:56 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:25:23.238 08:21:56 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:25:23.238 08:21:56 -- common/autotest_common.sh@1542 -- # continue 00:25:23.238 08:21:56 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:25:23.238 08:21:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:23.238 08:21:56 -- common/autotest_common.sh@10 -- # set +x 00:25:23.238 08:21:56 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:25:23.238 08:21:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:23.238 08:21:56 -- common/autotest_common.sh@10 -- # set +x 00:25:23.238 08:21:56 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:24.175 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:24.175 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:25:24.175 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:25:24.175 08:21:57 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:25:24.175 08:21:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:24.175 08:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:24.432 08:21:57 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:25:24.432 08:21:57 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:25:24.432 08:21:57 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:25:24.432 08:21:57 -- common/autotest_common.sh@1562 -- # bdfs=() 00:25:24.432 08:21:57 -- common/autotest_common.sh@1562 -- # local bdfs 00:25:24.432 08:21:57 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:25:24.432 08:21:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:25:24.432 08:21:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:25:24.432 08:21:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:24.432 08:21:57 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:24.432 08:21:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:25:24.432 08:21:57 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:25:24.432 08:21:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:24.432 08:21:57 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:25:24.432 08:21:57 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:25:24.432 08:21:57 -- common/autotest_common.sh@1565 -- # device=0x0010 00:25:24.432 08:21:57 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:25:24.432 08:21:57 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:25:24.432 08:21:57 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:25:24.432 08:21:57 -- common/autotest_common.sh@1565 -- # device=0x0010 00:25:24.432 08:21:57 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:25:24.432 08:21:57 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:25:24.432 08:21:57 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:25:24.432 08:21:57 -- common/autotest_common.sh@1578 -- # return 0 00:25:24.432 08:21:57 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:25:24.432 08:21:57 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:25:24.432 08:21:57 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:25:24.432 08:21:57 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:25:24.432 08:21:57 -- spdk/autotest.sh@173 -- # timing_enter lib 00:25:24.433 08:21:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:24.433 08:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:24.433 08:21:57 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:25:24.433 08:21:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:24.433 08:21:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:24.433 08:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:24.433 ************************************ 00:25:24.433 START TEST env 00:25:24.433 ************************************ 00:25:24.433 08:21:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:25:24.433 * Looking for test storage... 
00:25:24.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:25:24.433 08:21:57 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:25:24.433 08:21:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:24.433 08:21:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:24.433 08:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:24.433 ************************************ 00:25:24.433 START TEST env_memory 00:25:24.433 ************************************ 00:25:24.433 08:21:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:25:24.691 00:25:24.691 00:25:24.691 CUnit - A unit testing framework for C - Version 2.1-3 00:25:24.691 http://cunit.sourceforge.net/ 00:25:24.691 00:25:24.691 00:25:24.691 Suite: memory 00:25:24.691 Test: alloc and free memory map ...[2024-04-17 08:21:57.796000] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:25:24.691 passed 00:25:24.691 Test: mem map translation ...[2024-04-17 08:21:57.816741] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:25:24.691 [2024-04-17 08:21:57.816778] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:25:24.691 [2024-04-17 08:21:57.816813] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:25:24.691 [2024-04-17 08:21:57.816818] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:25:24.691 passed 00:25:24.691 Test: mem map registration ...[2024-04-17 08:21:57.859775] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:25:24.691 [2024-04-17 08:21:57.859826] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:25:24.691 passed 00:25:24.691 Test: mem map adjacent registrations ...passed 00:25:24.691 00:25:24.691 Run Summary: Type Total Ran Passed Failed Inactive 00:25:24.691 suites 1 1 n/a 0 0 00:25:24.691 tests 4 4 4 0 0 00:25:24.691 asserts 152 152 152 0 n/a 00:25:24.691 00:25:24.691 Elapsed time = 0.146 seconds 00:25:24.691 00:25:24.691 real 0m0.166s 00:25:24.691 user 0m0.153s 00:25:24.691 sys 0m0.012s 00:25:24.692 08:21:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:24.692 08:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:24.692 ************************************ 00:25:24.692 END TEST env_memory 00:25:24.692 ************************************ 00:25:24.692 08:21:57 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:25:24.692 08:21:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:24.692 08:21:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:24.692 08:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:24.692 ************************************ 00:25:24.692 START TEST env_vtophys 00:25:24.692 ************************************ 00:25:24.692 08:21:57 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:25:24.692 EAL: lib.eal log level changed from notice to debug 00:25:24.692 EAL: Detected lcore 0 as core 0 on socket 0 00:25:24.692 EAL: Detected lcore 1 as core 0 on socket 0 00:25:24.692 EAL: Detected lcore 2 as core 0 on socket 0 00:25:24.692 EAL: Detected lcore 3 as core 0 on socket 0 00:25:24.692 EAL: Detected lcore 4 as core 0 on socket 0 00:25:24.692 EAL: Detected lcore 5 as core 0 on socket 0 00:25:24.692 EAL: Detected lcore 6 as core 0 on socket 0 00:25:24.692 EAL: Detected lcore 7 as core 0 on socket 0 00:25:24.692 EAL: Detected lcore 8 as core 0 on socket 0 00:25:24.692 EAL: Detected lcore 9 as core 0 on socket 0 00:25:24.692 EAL: Maximum logical cores by configuration: 128 00:25:24.692 EAL: Detected CPU lcores: 10 00:25:24.692 EAL: Detected NUMA nodes: 1 00:25:24.692 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:25:24.692 EAL: Detected shared linkage of DPDK 00:25:24.692 EAL: No shared files mode enabled, IPC will be disabled 00:25:24.692 EAL: Selected IOVA mode 'PA' 00:25:24.692 EAL: Probing VFIO support... 00:25:24.692 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:25:24.692 EAL: VFIO modules not loaded, skipping VFIO support... 00:25:24.692 EAL: Ask a virtual area of 0x2e000 bytes 00:25:24.692 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:25:24.692 EAL: Setting up physically contiguous memory... 00:25:24.692 EAL: Setting maximum number of open files to 524288 00:25:24.692 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:25:24.692 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:25:24.692 EAL: Ask a virtual area of 0x61000 bytes 00:25:24.692 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:25:24.692 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:25:24.692 EAL: Ask a virtual area of 0x400000000 bytes 00:25:24.692 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:25:24.692 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:25:24.692 EAL: Ask a virtual area of 0x61000 bytes 00:25:24.692 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:25:24.692 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:25:24.692 EAL: Ask a virtual area of 0x400000000 bytes 00:25:24.692 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:25:24.692 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:25:24.692 EAL: Ask a virtual area of 0x61000 bytes 00:25:24.692 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:25:24.692 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:25:24.692 EAL: Ask a virtual area of 0x400000000 bytes 00:25:24.692 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:25:24.692 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:25:24.692 EAL: Ask a virtual area of 0x61000 bytes 00:25:24.692 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:25:24.692 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:25:24.692 EAL: Ask a virtual area of 0x400000000 bytes 00:25:24.692 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:25:24.692 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:25:24.692 EAL: Hugepages will be freed exactly as allocated. 
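For reference, the EAL probe sequence above (VFIO detection, then carving 2 MiB hugepage-backed memseg lists) can be spot-checked by hand. A minimal sketch assuming a standard Linux sysfs/procfs layout; the script is illustrative only and is not part of the SPDK test suite:

    #!/usr/bin/env bash
    # Mirror the preconditions EAL reports above: VFIO module presence
    # and the 2048 kB hugepage pool the memseg lists are reserved from.
    if [[ -d /sys/module/vfio ]]; then
        echo "vfio module loaded"
    else
        echo "vfio module missing; EAL skips VFIO support, as logged above"
    fi
    grep -i '^huge' /proc/meminfo
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages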
00:25:24.692 EAL: No shared files mode enabled, IPC is disabled 00:25:24.692 EAL: No shared files mode enabled, IPC is disabled 00:25:24.950 EAL: TSC frequency is ~2290000 KHz 00:25:24.950 EAL: Main lcore 0 is ready (tid=7f892ee16a00;cpuset=[0]) 00:25:24.950 EAL: Trying to obtain current memory policy. 00:25:24.950 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:24.950 EAL: Restoring previous memory policy: 0 00:25:24.950 EAL: request: mp_malloc_sync 00:25:24.950 EAL: No shared files mode enabled, IPC is disabled 00:25:24.950 EAL: Heap on socket 0 was expanded by 2MB 00:25:24.950 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:25:24.950 EAL: No PCI address specified using 'addr=' in: bus=pci 00:25:24.950 EAL: Mem event callback 'spdk:(nil)' registered 00:25:24.950 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:25:24.950 00:25:24.950 00:25:24.950 CUnit - A unit testing framework for C - Version 2.1-3 00:25:24.950 http://cunit.sourceforge.net/ 00:25:24.950 00:25:24.950 00:25:24.950 Suite: components_suite 00:25:24.950 Test: vtophys_malloc_test ...passed 00:25:24.950 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:25:24.950 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:24.950 EAL: Restoring previous memory policy: 4 00:25:24.950 EAL: Calling mem event callback 'spdk:(nil)' 00:25:24.950 EAL: request: mp_malloc_sync 00:25:24.950 EAL: No shared files mode enabled, IPC is disabled 00:25:24.950 EAL: Heap on socket 0 was expanded by 4MB 00:25:24.951 EAL: Calling mem event callback 'spdk:(nil)' 00:25:24.951 EAL: request: mp_malloc_sync 00:25:24.951 EAL: No shared files mode enabled, IPC is disabled 00:25:24.951 EAL: Heap on socket 0 was shrunk by 4MB 00:25:24.951 EAL: Trying to obtain current memory policy. 00:25:24.951 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:24.951 EAL: Restoring previous memory policy: 4 00:25:24.951 EAL: Calling mem event callback 'spdk:(nil)' 00:25:24.951 EAL: request: mp_malloc_sync 00:25:24.951 EAL: No shared files mode enabled, IPC is disabled 00:25:24.951 EAL: Heap on socket 0 was expanded by 6MB 00:25:24.951 EAL: Calling mem event callback 'spdk:(nil)' 00:25:24.951 EAL: request: mp_malloc_sync 00:25:24.951 EAL: No shared files mode enabled, IPC is disabled 00:25:24.951 EAL: Heap on socket 0 was shrunk by 6MB 00:25:24.951 EAL: Trying to obtain current memory policy. 00:25:24.951 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:24.951 EAL: Restoring previous memory policy: 4 00:25:24.951 EAL: Calling mem event callback 'spdk:(nil)' 00:25:24.951 EAL: request: mp_malloc_sync 00:25:24.951 EAL: No shared files mode enabled, IPC is disabled 00:25:24.951 EAL: Heap on socket 0 was expanded by 10MB 00:25:24.951 EAL: Calling mem event callback 'spdk:(nil)' 00:25:24.951 EAL: request: mp_malloc_sync 00:25:24.951 EAL: No shared files mode enabled, IPC is disabled 00:25:24.951 EAL: Heap on socket 0 was shrunk by 10MB 00:25:24.951 EAL: Trying to obtain current memory policy. 
00:25:24.951 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:24.951 EAL: Restoring previous memory policy: 4 00:25:24.951 EAL: Calling mem event callback 'spdk:(nil)' 00:25:24.951 EAL: request: mp_malloc_sync 00:25:24.951 EAL: No shared files mode enabled, IPC is disabled 00:25:24.951 EAL: Heap on socket 0 was expanded by 18MB 00:25:24.951 EAL: Calling mem event callback 'spdk:(nil)' 00:25:24.951 EAL: request: mp_malloc_sync 00:25:24.951 EAL: No shared files mode enabled, IPC is disabled 00:25:24.951 EAL: Heap on socket 0 was shrunk by 18MB 00:25:24.951 EAL: Trying to obtain current memory policy. 00:25:24.951 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:24.951 EAL: Restoring previous memory policy: 4 00:25:24.951 EAL: Calling mem event callback 'spdk:(nil)' 00:25:24.951 EAL: request: mp_malloc_sync 00:25:24.951 EAL: No shared files mode enabled, IPC is disabled 00:25:24.951 EAL: Heap on socket 0 was expanded by 34MB 00:25:24.951 EAL: Calling mem event callback 'spdk:(nil)' 00:25:24.951 EAL: request: mp_malloc_sync 00:25:24.951 EAL: No shared files mode enabled, IPC is disabled 00:25:24.951 EAL: Heap on socket 0 was shrunk by 34MB 00:25:24.951 EAL: Trying to obtain current memory policy. 00:25:24.951 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:24.951 EAL: Restoring previous memory policy: 4 00:25:24.951 EAL: Calling mem event callback 'spdk:(nil)' 00:25:24.951 EAL: request: mp_malloc_sync 00:25:24.951 EAL: No shared files mode enabled, IPC is disabled 00:25:24.951 EAL: Heap on socket 0 was expanded by 66MB 00:25:24.951 EAL: Calling mem event callback 'spdk:(nil)' 00:25:24.951 EAL: request: mp_malloc_sync 00:25:24.951 EAL: No shared files mode enabled, IPC is disabled 00:25:24.951 EAL: Heap on socket 0 was shrunk by 66MB 00:25:24.951 EAL: Trying to obtain current memory policy. 00:25:24.951 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:24.951 EAL: Restoring previous memory policy: 4 00:25:24.951 EAL: Calling mem event callback 'spdk:(nil)' 00:25:24.951 EAL: request: mp_malloc_sync 00:25:24.951 EAL: No shared files mode enabled, IPC is disabled 00:25:24.951 EAL: Heap on socket 0 was expanded by 130MB 00:25:24.951 EAL: Calling mem event callback 'spdk:(nil)' 00:25:24.951 EAL: request: mp_malloc_sync 00:25:24.951 EAL: No shared files mode enabled, IPC is disabled 00:25:24.951 EAL: Heap on socket 0 was shrunk by 130MB 00:25:24.951 EAL: Trying to obtain current memory policy. 00:25:24.951 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:25.210 EAL: Restoring previous memory policy: 4 00:25:25.210 EAL: Calling mem event callback 'spdk:(nil)' 00:25:25.210 EAL: request: mp_malloc_sync 00:25:25.210 EAL: No shared files mode enabled, IPC is disabled 00:25:25.210 EAL: Heap on socket 0 was expanded by 258MB 00:25:25.210 EAL: Calling mem event callback 'spdk:(nil)' 00:25:25.210 EAL: request: mp_malloc_sync 00:25:25.210 EAL: No shared files mode enabled, IPC is disabled 00:25:25.210 EAL: Heap on socket 0 was shrunk by 258MB 00:25:25.210 EAL: Trying to obtain current memory policy. 
00:25:25.210 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:25.210 EAL: Restoring previous memory policy: 4 00:25:25.210 EAL: Calling mem event callback 'spdk:(nil)' 00:25:25.210 EAL: request: mp_malloc_sync 00:25:25.210 EAL: No shared files mode enabled, IPC is disabled 00:25:25.210 EAL: Heap on socket 0 was expanded by 514MB 00:25:25.470 EAL: Calling mem event callback 'spdk:(nil)' 00:25:25.470 EAL: request: mp_malloc_sync 00:25:25.470 EAL: No shared files mode enabled, IPC is disabled 00:25:25.470 EAL: Heap on socket 0 was shrunk by 514MB 00:25:25.470 EAL: Trying to obtain current memory policy. 00:25:25.470 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:25.729 EAL: Restoring previous memory policy: 4 00:25:25.729 EAL: Calling mem event callback 'spdk:(nil)' 00:25:25.729 EAL: request: mp_malloc_sync 00:25:25.729 EAL: No shared files mode enabled, IPC is disabled 00:25:25.729 EAL: Heap on socket 0 was expanded by 1026MB 00:25:25.729 EAL: Calling mem event callback 'spdk:(nil)' 00:25:25.990 passed 00:25:25.990 00:25:25.990 Run Summary: Type Total Ran Passed Failed Inactive 00:25:25.990 suites 1 1 n/a 0 0 00:25:25.990 tests 2 2 2 0 0 00:25:25.990 asserts 5169 5169 5169 0 n/a 00:25:25.990 00:25:25.990 Elapsed time = 0.988 seconds 00:25:25.990 EAL: request: mp_malloc_sync 00:25:25.990 EAL: No shared files mode enabled, IPC is disabled 00:25:25.990 EAL: Heap on socket 0 was shrunk by 1026MB 00:25:25.990 EAL: Calling mem event callback 'spdk:(nil)' 00:25:25.990 EAL: request: mp_malloc_sync 00:25:25.990 EAL: No shared files mode enabled, IPC is disabled 00:25:25.990 EAL: Heap on socket 0 was shrunk by 2MB 00:25:25.990 EAL: No shared files mode enabled, IPC is disabled 00:25:25.990 EAL: No shared files mode enabled, IPC is disabled 00:25:25.991 EAL: No shared files mode enabled, IPC is disabled 00:25:25.991 00:25:25.991 real 0m1.192s 00:25:25.991 user 0m0.645s 00:25:25.991 sys 0m0.417s 00:25:25.991 08:21:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:25.991 08:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:25.991 ************************************ 00:25:25.991 END TEST env_vtophys 00:25:25.991 ************************************ 00:25:25.991 08:21:59 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:25:25.991 08:21:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:25.991 08:21:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:25.991 08:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:25.991 ************************************ 00:25:25.991 START TEST env_pci 00:25:25.991 ************************************ 00:25:25.991 08:21:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:25:25.991 00:25:25.991 00:25:25.991 CUnit - A unit testing framework for C - Version 2.1-3 00:25:25.991 http://cunit.sourceforge.net/ 00:25:25.991 00:25:25.991 00:25:25.991 Suite: pci 00:25:25.991 Test: pci_hook ...[2024-04-17 08:21:59.234714] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 55683 has claimed it 00:25:25.991 passed 00:25:25.991 00:25:25.991 Run Summary: Type Total Ran Passed Failed Inactive 00:25:25.991 suites 1 1 n/a 0 0 00:25:25.991 tests 1 1 1 0 0 00:25:25.991 asserts 25 25 25 0 n/a 00:25:25.991 00:25:25.991 Elapsed time = 0.002 seconds 00:25:25.991 EAL: Cannot find device (10000:00:01.0) 00:25:25.991 EAL: Failed to attach device 
on primary process 00:25:25.991 00:25:25.991 real 0m0.021s 00:25:25.991 user 0m0.010s 00:25:25.991 sys 0m0.012s 00:25:25.991 08:21:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:25.991 08:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:25.991 ************************************ 00:25:25.991 END TEST env_pci 00:25:25.991 ************************************ 00:25:25.991 08:21:59 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:25:25.991 08:21:59 -- env/env.sh@15 -- # uname 00:25:25.991 08:21:59 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:25:25.991 08:21:59 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:25:25.991 08:21:59 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:25:25.991 08:21:59 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:25:25.991 08:21:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:25.991 08:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:25.991 ************************************ 00:25:25.991 START TEST env_dpdk_post_init 00:25:25.991 ************************************ 00:25:25.991 08:21:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:25:26.252 EAL: Detected CPU lcores: 10 00:25:26.252 EAL: Detected NUMA nodes: 1 00:25:26.252 EAL: Detected shared linkage of DPDK 00:25:26.252 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:25:26.252 EAL: Selected IOVA mode 'PA' 00:25:26.252 TELEMETRY: No legacy callbacks, legacy socket not created 00:25:26.252 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:25:26.252 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:25:26.252 Starting DPDK initialization... 00:25:26.252 Starting SPDK post initialization... 00:25:26.252 SPDK NVMe probe 00:25:26.252 Attaching to 0000:00:06.0 00:25:26.252 Attaching to 0000:00:07.0 00:25:26.252 Attached to 0000:00:06.0 00:25:26.252 Attached to 0000:00:07.0 00:25:26.252 Cleaning up... 
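The probe/cleanup cycle above (spdk_nvme attaching to 0000:00:06.0 and 0000:00:07.0, then "Cleaning up...") is framed by the repo's setup.sh, which this log drives with its config, status, and reset subcommands. A condensed sketch of that flow, assuming it runs from the same repo root used throughout this log:

    cd /home/vagrant/spdk_repo/spdk
    # Rebind only the allowed controller away from the kernel nvme driver.
    PCI_ALLOWED="0000:00:06.0" ./scripts/setup.sh config   # nvme -> uio_pci_generic
    ./scripts/setup.sh status   # prints the BDF/driver/block-device table seen earlier
    ./scripts/setup.sh reset    # uio_pci_generic -> nvme, restoring the kernel driver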
00:25:26.252 00:25:26.252 real 0m0.186s 00:25:26.252 user 0m0.047s 00:25:26.252 sys 0m0.039s 00:25:26.252 08:21:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:26.252 08:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:26.252 ************************************ 00:25:26.252 END TEST env_dpdk_post_init 00:25:26.252 ************************************ 00:25:26.252 08:21:59 -- env/env.sh@26 -- # uname 00:25:26.252 08:21:59 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:25:26.252 08:21:59 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:25:26.252 08:21:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:26.252 08:21:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:26.252 08:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:26.252 ************************************ 00:25:26.252 START TEST env_mem_callbacks 00:25:26.252 ************************************ 00:25:26.252 08:21:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:25:26.511 EAL: Detected CPU lcores: 10 00:25:26.511 EAL: Detected NUMA nodes: 1 00:25:26.511 EAL: Detected shared linkage of DPDK 00:25:26.511 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:25:26.511 EAL: Selected IOVA mode 'PA' 00:25:26.511 00:25:26.511 00:25:26.511 CUnit - A unit testing framework for C - Version 2.1-3 00:25:26.511 http://cunit.sourceforge.net/ 00:25:26.511 00:25:26.511 00:25:26.511 Suite: memoryTELEMETRY: No legacy callbacks, legacy socket not created 00:25:26.511 00:25:26.511 Test: test ... 00:25:26.511 register 0x200000200000 2097152 00:25:26.511 malloc 3145728 00:25:26.511 register 0x200000400000 4194304 00:25:26.511 buf 0x200000500000 len 3145728 PASSED 00:25:26.511 malloc 64 00:25:26.511 buf 0x2000004fff40 len 64 PASSED 00:25:26.511 malloc 4194304 00:25:26.511 register 0x200000800000 6291456 00:25:26.511 buf 0x200000a00000 len 4194304 PASSED 00:25:26.511 free 0x200000500000 3145728 00:25:26.511 free 0x2000004fff40 64 00:25:26.511 unregister 0x200000400000 4194304 PASSED 00:25:26.511 free 0x200000a00000 4194304 00:25:26.511 unregister 0x200000800000 6291456 PASSED 00:25:26.511 malloc 8388608 00:25:26.511 register 0x200000400000 10485760 00:25:26.511 buf 0x200000600000 len 8388608 PASSED 00:25:26.511 free 0x200000600000 8388608 00:25:26.511 unregister 0x200000400000 10485760 PASSED 00:25:26.511 passed 00:25:26.511 00:25:26.511 Run Summary: Type Total Ran Passed Failed Inactive 00:25:26.511 suites 1 1 n/a 0 0 00:25:26.511 tests 1 1 1 0 0 00:25:26.511 asserts 15 15 15 0 n/a 00:25:26.511 00:25:26.511 Elapsed time = 0.009 seconds 00:25:26.511 00:25:26.511 real 0m0.149s 00:25:26.511 user 0m0.017s 00:25:26.511 sys 0m0.031s 00:25:26.511 08:21:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:26.511 08:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:26.511 ************************************ 00:25:26.511 END TEST env_mem_callbacks 00:25:26.511 ************************************ 00:25:26.511 00:25:26.511 real 0m2.113s 00:25:26.511 user 0m1.007s 00:25:26.511 sys 0m0.782s 00:25:26.511 08:21:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:26.511 08:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:26.511 ************************************ 00:25:26.511 END TEST env 00:25:26.511 ************************************ 00:25:26.511 08:21:59 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
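Each suite in this log runs through the harness's run_test helper from common/autotest_common.sh; only its effects are visible here (the asterisk banners, the real/user/sys timing, the xtrace toggling). A simplified reconstruction inferred from those markers, not the actual implementation:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"    # emits the real/user/sys lines recorded above
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }
    # Usage mirroring the log: run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh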
00:25:26.511 08:21:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:26.511 08:21:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:26.511 08:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:26.511 ************************************ 00:25:26.511 START TEST rpc 00:25:26.511 ************************************ 00:25:26.511 08:21:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:25:26.769 * Looking for test storage... 00:25:26.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:25:26.769 08:21:59 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:25:26.769 08:21:59 -- rpc/rpc.sh@65 -- # spdk_pid=55797 00:25:26.769 08:21:59 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:25:26.769 08:21:59 -- rpc/rpc.sh@67 -- # waitforlisten 55797 00:25:26.769 08:21:59 -- common/autotest_common.sh@819 -- # '[' -z 55797 ']' 00:25:26.769 08:21:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.769 08:21:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:26.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:26.769 08:21:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.769 08:21:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:26.769 08:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:26.769 [2024-04-17 08:21:59.972711] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:26.769 [2024-04-17 08:21:59.972827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55797 ] 00:25:27.027 [2024-04-17 08:22:00.115821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.027 [2024-04-17 08:22:00.219438] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:27.027 [2024-04-17 08:22:00.219587] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:25:27.027 [2024-04-17 08:22:00.219596] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 55797' to capture a snapshot of events at runtime. 00:25:27.027 [2024-04-17 08:22:00.219602] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid55797 for offline analysis/debug. 
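The rpc suite starting here brings up spdk_tgt with the bdev tracepoint group enabled (-e bdev) and waits on its RPC socket before issuing rpc_cmd calls; the NOTICE lines above even name the snapshot command. A sketch of that startup sequence, assuming the default /var/tmp/spdk.sock socket, with a polling loop standing in for the harness's waitforlisten:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    # Poll until the target answers on its RPC socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    # Per the NOTICE above, a runtime snapshot could then be captured with:
    #   spdk_trace -s spdk_tgt -p 55797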
00:25:27.027 [2024-04-17 08:22:00.219624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.594 08:22:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:27.594 08:22:00 -- common/autotest_common.sh@852 -- # return 0 00:25:27.594 08:22:00 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:25:27.594 08:22:00 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:25:27.594 08:22:00 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:25:27.594 08:22:00 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:25:27.594 08:22:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:27.594 08:22:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:27.594 08:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:27.594 ************************************ 00:25:27.594 START TEST rpc_integrity 00:25:27.594 ************************************ 00:25:27.594 08:22:00 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:25:27.594 08:22:00 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:27.594 08:22:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.594 08:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:27.594 08:22:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.594 08:22:00 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:25:27.594 08:22:00 -- rpc/rpc.sh@13 -- # jq length 00:25:27.851 08:22:00 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:25:27.851 08:22:00 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:25:27.851 08:22:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.851 08:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:27.851 08:22:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.851 08:22:00 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:25:27.851 08:22:00 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:25:27.851 08:22:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.851 08:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:27.851 08:22:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.851 08:22:00 -- rpc/rpc.sh@16 -- # bdevs='[ 00:25:27.851 { 00:25:27.851 "aliases": [ 00:25:27.851 "48dcb076-060a-4b89-a43f-a1808d6e3ddd" 00:25:27.851 ], 00:25:27.851 "assigned_rate_limits": { 00:25:27.851 "r_mbytes_per_sec": 0, 00:25:27.851 "rw_ios_per_sec": 0, 00:25:27.851 "rw_mbytes_per_sec": 0, 00:25:27.851 "w_mbytes_per_sec": 0 00:25:27.851 }, 00:25:27.851 "block_size": 512, 00:25:27.851 "claimed": false, 00:25:27.851 "driver_specific": {}, 00:25:27.851 "memory_domains": [ 00:25:27.851 { 00:25:27.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.851 "dma_device_type": 2 00:25:27.851 } 00:25:27.851 ], 00:25:27.851 "name": "Malloc0", 00:25:27.851 "num_blocks": 16384, 00:25:27.851 "product_name": "Malloc disk", 00:25:27.851 "supported_io_types": { 00:25:27.851 "abort": true, 00:25:27.851 "compare": false, 00:25:27.851 "compare_and_write": false, 00:25:27.851 "flush": true, 00:25:27.851 "nvme_admin": false, 00:25:27.851 "nvme_io": false, 00:25:27.851 "read": true, 00:25:27.851 "reset": true, 00:25:27.851 "unmap": true, 00:25:27.851 "write": true, 00:25:27.851 "write_zeroes": true 00:25:27.851 }, 
00:25:27.851 "uuid": "48dcb076-060a-4b89-a43f-a1808d6e3ddd", 00:25:27.851 "zoned": false 00:25:27.851 } 00:25:27.851 ]' 00:25:27.851 08:22:00 -- rpc/rpc.sh@17 -- # jq length 00:25:27.851 08:22:01 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:25:27.851 08:22:01 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:25:27.851 08:22:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.851 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:27.851 [2024-04-17 08:22:01.045218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:25:27.851 [2024-04-17 08:22:01.045260] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.851 [2024-04-17 08:22:01.045273] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21f03a0 00:25:27.851 [2024-04-17 08:22:01.045279] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.851 [2024-04-17 08:22:01.046743] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.851 [2024-04-17 08:22:01.046771] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:25:27.851 Passthru0 00:25:27.851 08:22:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.851 08:22:01 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:25:27.851 08:22:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.851 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:27.851 08:22:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.851 08:22:01 -- rpc/rpc.sh@20 -- # bdevs='[ 00:25:27.851 { 00:25:27.851 "aliases": [ 00:25:27.851 "48dcb076-060a-4b89-a43f-a1808d6e3ddd" 00:25:27.851 ], 00:25:27.851 "assigned_rate_limits": { 00:25:27.851 "r_mbytes_per_sec": 0, 00:25:27.851 "rw_ios_per_sec": 0, 00:25:27.851 "rw_mbytes_per_sec": 0, 00:25:27.851 "w_mbytes_per_sec": 0 00:25:27.851 }, 00:25:27.851 "block_size": 512, 00:25:27.851 "claim_type": "exclusive_write", 00:25:27.851 "claimed": true, 00:25:27.851 "driver_specific": {}, 00:25:27.851 "memory_domains": [ 00:25:27.851 { 00:25:27.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.851 "dma_device_type": 2 00:25:27.851 } 00:25:27.851 ], 00:25:27.851 "name": "Malloc0", 00:25:27.851 "num_blocks": 16384, 00:25:27.851 "product_name": "Malloc disk", 00:25:27.851 "supported_io_types": { 00:25:27.851 "abort": true, 00:25:27.851 "compare": false, 00:25:27.851 "compare_and_write": false, 00:25:27.851 "flush": true, 00:25:27.851 "nvme_admin": false, 00:25:27.851 "nvme_io": false, 00:25:27.851 "read": true, 00:25:27.852 "reset": true, 00:25:27.852 "unmap": true, 00:25:27.852 "write": true, 00:25:27.852 "write_zeroes": true 00:25:27.852 }, 00:25:27.852 "uuid": "48dcb076-060a-4b89-a43f-a1808d6e3ddd", 00:25:27.852 "zoned": false 00:25:27.852 }, 00:25:27.852 { 00:25:27.852 "aliases": [ 00:25:27.852 "18536c36-6d54-5a2c-83e4-a7dc07e1e86d" 00:25:27.852 ], 00:25:27.852 "assigned_rate_limits": { 00:25:27.852 "r_mbytes_per_sec": 0, 00:25:27.852 "rw_ios_per_sec": 0, 00:25:27.852 "rw_mbytes_per_sec": 0, 00:25:27.852 "w_mbytes_per_sec": 0 00:25:27.852 }, 00:25:27.852 "block_size": 512, 00:25:27.852 "claimed": false, 00:25:27.852 "driver_specific": { 00:25:27.852 "passthru": { 00:25:27.852 "base_bdev_name": "Malloc0", 00:25:27.852 "name": "Passthru0" 00:25:27.852 } 00:25:27.852 }, 00:25:27.852 "memory_domains": [ 00:25:27.852 { 00:25:27.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.852 "dma_device_type": 2 00:25:27.852 } 00:25:27.852 ], 
00:25:27.852 "name": "Passthru0", 00:25:27.852 "num_blocks": 16384, 00:25:27.852 "product_name": "passthru", 00:25:27.852 "supported_io_types": { 00:25:27.852 "abort": true, 00:25:27.852 "compare": false, 00:25:27.852 "compare_and_write": false, 00:25:27.852 "flush": true, 00:25:27.852 "nvme_admin": false, 00:25:27.852 "nvme_io": false, 00:25:27.852 "read": true, 00:25:27.852 "reset": true, 00:25:27.852 "unmap": true, 00:25:27.852 "write": true, 00:25:27.852 "write_zeroes": true 00:25:27.852 }, 00:25:27.852 "uuid": "18536c36-6d54-5a2c-83e4-a7dc07e1e86d", 00:25:27.852 "zoned": false 00:25:27.852 } 00:25:27.852 ]' 00:25:27.852 08:22:01 -- rpc/rpc.sh@21 -- # jq length 00:25:27.852 08:22:01 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:25:27.852 08:22:01 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:25:27.852 08:22:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.852 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:27.852 08:22:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.852 08:22:01 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:27.852 08:22:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.852 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:27.852 08:22:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.852 08:22:01 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:25:27.852 08:22:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.852 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:27.852 08:22:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.852 08:22:01 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:25:27.852 08:22:01 -- rpc/rpc.sh@26 -- # jq length 00:25:28.109 ************************************ 00:25:28.109 END TEST rpc_integrity 00:25:28.109 ************************************ 00:25:28.109 08:22:01 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:25:28.109 00:25:28.109 real 0m0.322s 00:25:28.109 user 0m0.193s 00:25:28.109 sys 0m0.046s 00:25:28.109 08:22:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:28.109 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:28.109 08:22:01 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:25:28.109 08:22:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:28.109 08:22:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:28.109 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:28.109 ************************************ 00:25:28.109 START TEST rpc_plugins 00:25:28.109 ************************************ 00:25:28.109 08:22:01 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:25:28.109 08:22:01 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:25:28.109 08:22:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.109 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:28.109 08:22:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.109 08:22:01 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:25:28.109 08:22:01 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:25:28.109 08:22:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.109 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:28.109 08:22:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.109 08:22:01 -- rpc/rpc.sh@31 -- # bdevs='[ 00:25:28.109 { 00:25:28.109 "aliases": [ 00:25:28.109 "8f50f92a-c51f-4a1b-bc37-69858dc570c8" 00:25:28.109 ], 00:25:28.109 "assigned_rate_limits": { 00:25:28.109 "r_mbytes_per_sec": 0, 00:25:28.109 
"rw_ios_per_sec": 0, 00:25:28.109 "rw_mbytes_per_sec": 0, 00:25:28.109 "w_mbytes_per_sec": 0 00:25:28.109 }, 00:25:28.109 "block_size": 4096, 00:25:28.109 "claimed": false, 00:25:28.109 "driver_specific": {}, 00:25:28.109 "memory_domains": [ 00:25:28.109 { 00:25:28.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:28.109 "dma_device_type": 2 00:25:28.109 } 00:25:28.109 ], 00:25:28.109 "name": "Malloc1", 00:25:28.109 "num_blocks": 256, 00:25:28.109 "product_name": "Malloc disk", 00:25:28.109 "supported_io_types": { 00:25:28.109 "abort": true, 00:25:28.109 "compare": false, 00:25:28.109 "compare_and_write": false, 00:25:28.109 "flush": true, 00:25:28.109 "nvme_admin": false, 00:25:28.109 "nvme_io": false, 00:25:28.109 "read": true, 00:25:28.109 "reset": true, 00:25:28.109 "unmap": true, 00:25:28.109 "write": true, 00:25:28.109 "write_zeroes": true 00:25:28.109 }, 00:25:28.109 "uuid": "8f50f92a-c51f-4a1b-bc37-69858dc570c8", 00:25:28.109 "zoned": false 00:25:28.109 } 00:25:28.109 ]' 00:25:28.109 08:22:01 -- rpc/rpc.sh@32 -- # jq length 00:25:28.109 08:22:01 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:25:28.109 08:22:01 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:25:28.109 08:22:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.109 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:28.109 08:22:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.109 08:22:01 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:25:28.109 08:22:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.109 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:28.109 08:22:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.109 08:22:01 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:25:28.109 08:22:01 -- rpc/rpc.sh@36 -- # jq length 00:25:28.109 ************************************ 00:25:28.109 END TEST rpc_plugins 00:25:28.109 ************************************ 00:25:28.109 08:22:01 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:25:28.109 00:25:28.109 real 0m0.152s 00:25:28.109 user 0m0.090s 00:25:28.109 sys 0m0.023s 00:25:28.109 08:22:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:28.109 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:28.366 08:22:01 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:25:28.366 08:22:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:28.366 08:22:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:28.366 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:28.366 ************************************ 00:25:28.366 START TEST rpc_trace_cmd_test 00:25:28.366 ************************************ 00:25:28.366 08:22:01 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:25:28.366 08:22:01 -- rpc/rpc.sh@40 -- # local info 00:25:28.366 08:22:01 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:25:28.366 08:22:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.366 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:28.366 08:22:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.366 08:22:01 -- rpc/rpc.sh@42 -- # info='{ 00:25:28.366 "bdev": { 00:25:28.366 "mask": "0x8", 00:25:28.366 "tpoint_mask": "0xffffffffffffffff" 00:25:28.366 }, 00:25:28.366 "bdev_nvme": { 00:25:28.366 "mask": "0x4000", 00:25:28.366 "tpoint_mask": "0x0" 00:25:28.366 }, 00:25:28.366 "blobfs": { 00:25:28.366 "mask": "0x80", 00:25:28.366 "tpoint_mask": "0x0" 00:25:28.366 }, 00:25:28.366 "dsa": { 00:25:28.366 "mask": "0x200", 00:25:28.366 
"tpoint_mask": "0x0" 00:25:28.366 }, 00:25:28.366 "ftl": { 00:25:28.366 "mask": "0x40", 00:25:28.366 "tpoint_mask": "0x0" 00:25:28.366 }, 00:25:28.366 "iaa": { 00:25:28.366 "mask": "0x1000", 00:25:28.366 "tpoint_mask": "0x0" 00:25:28.366 }, 00:25:28.366 "iscsi_conn": { 00:25:28.366 "mask": "0x2", 00:25:28.366 "tpoint_mask": "0x0" 00:25:28.366 }, 00:25:28.366 "nvme_pcie": { 00:25:28.366 "mask": "0x800", 00:25:28.366 "tpoint_mask": "0x0" 00:25:28.366 }, 00:25:28.366 "nvme_tcp": { 00:25:28.366 "mask": "0x2000", 00:25:28.366 "tpoint_mask": "0x0" 00:25:28.366 }, 00:25:28.366 "nvmf_rdma": { 00:25:28.367 "mask": "0x10", 00:25:28.367 "tpoint_mask": "0x0" 00:25:28.367 }, 00:25:28.367 "nvmf_tcp": { 00:25:28.367 "mask": "0x20", 00:25:28.367 "tpoint_mask": "0x0" 00:25:28.367 }, 00:25:28.367 "scsi": { 00:25:28.367 "mask": "0x4", 00:25:28.367 "tpoint_mask": "0x0" 00:25:28.367 }, 00:25:28.367 "thread": { 00:25:28.367 "mask": "0x400", 00:25:28.367 "tpoint_mask": "0x0" 00:25:28.367 }, 00:25:28.367 "tpoint_group_mask": "0x8", 00:25:28.367 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid55797" 00:25:28.367 }' 00:25:28.367 08:22:01 -- rpc/rpc.sh@43 -- # jq length 00:25:28.367 08:22:01 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:25:28.367 08:22:01 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:25:28.367 08:22:01 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:25:28.367 08:22:01 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:25:28.367 08:22:01 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:25:28.367 08:22:01 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:25:28.367 08:22:01 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:25:28.367 08:22:01 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:25:28.624 ************************************ 00:25:28.624 END TEST rpc_trace_cmd_test 00:25:28.624 ************************************ 00:25:28.624 08:22:01 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:25:28.624 00:25:28.624 real 0m0.239s 00:25:28.624 user 0m0.179s 00:25:28.624 sys 0m0.036s 00:25:28.624 08:22:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:28.624 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:28.624 08:22:01 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:25:28.624 08:22:01 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:25:28.624 08:22:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:28.624 08:22:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:28.624 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:28.624 ************************************ 00:25:28.624 START TEST go_rpc 00:25:28.624 ************************************ 00:25:28.624 08:22:01 -- common/autotest_common.sh@1104 -- # go_rpc 00:25:28.624 08:22:01 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:25:28.624 08:22:01 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:25:28.624 08:22:01 -- rpc/rpc.sh@52 -- # jq length 00:25:28.624 08:22:01 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:25:28.624 08:22:01 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:25:28.624 08:22:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.624 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:28.624 08:22:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.624 08:22:01 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:25:28.624 08:22:01 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:25:28.624 08:22:01 -- rpc/rpc.sh@56 -- # 
bdevs='[{"aliases":["4d950b94-c999-41fa-abc5-b39e0b99d55c"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"4d950b94-c999-41fa-abc5-b39e0b99d55c","zoned":false}]' 00:25:28.624 08:22:01 -- rpc/rpc.sh@57 -- # jq length 00:25:28.624 08:22:01 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:25:28.624 08:22:01 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:25:28.624 08:22:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.624 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:28.624 08:22:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.624 08:22:01 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:25:28.624 08:22:01 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:25:28.624 08:22:01 -- rpc/rpc.sh@61 -- # jq length 00:25:28.882 ************************************ 00:25:28.882 END TEST go_rpc 00:25:28.882 ************************************ 00:25:28.882 08:22:01 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:25:28.882 00:25:28.882 real 0m0.217s 00:25:28.882 user 0m0.139s 00:25:28.882 sys 0m0.048s 00:25:28.882 08:22:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:28.882 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:28.882 08:22:02 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:25:28.882 08:22:02 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:25:28.882 08:22:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:28.882 08:22:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:28.882 08:22:02 -- common/autotest_common.sh@10 -- # set +x 00:25:28.882 ************************************ 00:25:28.882 START TEST rpc_daemon_integrity 00:25:28.882 ************************************ 00:25:28.882 08:22:02 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:25:28.882 08:22:02 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:28.882 08:22:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.882 08:22:02 -- common/autotest_common.sh@10 -- # set +x 00:25:28.882 08:22:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.882 08:22:02 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:25:28.882 08:22:02 -- rpc/rpc.sh@13 -- # jq length 00:25:28.882 08:22:02 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:25:28.882 08:22:02 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:25:28.882 08:22:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.882 08:22:02 -- common/autotest_common.sh@10 -- # set +x 00:25:28.882 08:22:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.882 08:22:02 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:25:28.882 08:22:02 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:25:28.882 08:22:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.882 08:22:02 -- common/autotest_common.sh@10 -- # set +x 00:25:28.882 08:22:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.882 08:22:02 -- rpc/rpc.sh@16 -- # bdevs='[ 00:25:28.882 { 00:25:28.882 "aliases": [ 00:25:28.882 "947c9235-cc48-4913-b9e4-1c58375a9ee3" 00:25:28.882 ], 00:25:28.882 "assigned_rate_limits": { 00:25:28.882 
"r_mbytes_per_sec": 0, 00:25:28.882 "rw_ios_per_sec": 0, 00:25:28.882 "rw_mbytes_per_sec": 0, 00:25:28.882 "w_mbytes_per_sec": 0 00:25:28.882 }, 00:25:28.882 "block_size": 512, 00:25:28.882 "claimed": false, 00:25:28.882 "driver_specific": {}, 00:25:28.882 "memory_domains": [ 00:25:28.882 { 00:25:28.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:28.882 "dma_device_type": 2 00:25:28.882 } 00:25:28.882 ], 00:25:28.882 "name": "Malloc3", 00:25:28.882 "num_blocks": 16384, 00:25:28.882 "product_name": "Malloc disk", 00:25:28.882 "supported_io_types": { 00:25:28.882 "abort": true, 00:25:28.882 "compare": false, 00:25:28.882 "compare_and_write": false, 00:25:28.882 "flush": true, 00:25:28.882 "nvme_admin": false, 00:25:28.882 "nvme_io": false, 00:25:28.882 "read": true, 00:25:28.882 "reset": true, 00:25:28.882 "unmap": true, 00:25:28.882 "write": true, 00:25:28.882 "write_zeroes": true 00:25:28.882 }, 00:25:28.882 "uuid": "947c9235-cc48-4913-b9e4-1c58375a9ee3", 00:25:28.882 "zoned": false 00:25:28.882 } 00:25:28.882 ]' 00:25:28.882 08:22:02 -- rpc/rpc.sh@17 -- # jq length 00:25:28.882 08:22:02 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:25:28.882 08:22:02 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:25:28.882 08:22:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.882 08:22:02 -- common/autotest_common.sh@10 -- # set +x 00:25:28.882 [2024-04-17 08:22:02.195430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:25:28.882 [2024-04-17 08:22:02.195480] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:28.882 [2024-04-17 08:22:02.195500] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21ef8c0 00:25:28.882 [2024-04-17 08:22:02.195506] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:28.882 [2024-04-17 08:22:02.196957] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:28.882 [2024-04-17 08:22:02.196989] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:25:28.882 Passthru0 00:25:28.882 08:22:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.882 08:22:02 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:25:28.882 08:22:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.882 08:22:02 -- common/autotest_common.sh@10 -- # set +x 00:25:29.139 08:22:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.139 08:22:02 -- rpc/rpc.sh@20 -- # bdevs='[ 00:25:29.139 { 00:25:29.139 "aliases": [ 00:25:29.139 "947c9235-cc48-4913-b9e4-1c58375a9ee3" 00:25:29.139 ], 00:25:29.139 "assigned_rate_limits": { 00:25:29.139 "r_mbytes_per_sec": 0, 00:25:29.139 "rw_ios_per_sec": 0, 00:25:29.139 "rw_mbytes_per_sec": 0, 00:25:29.139 "w_mbytes_per_sec": 0 00:25:29.139 }, 00:25:29.139 "block_size": 512, 00:25:29.139 "claim_type": "exclusive_write", 00:25:29.139 "claimed": true, 00:25:29.139 "driver_specific": {}, 00:25:29.139 "memory_domains": [ 00:25:29.139 { 00:25:29.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.139 "dma_device_type": 2 00:25:29.139 } 00:25:29.139 ], 00:25:29.139 "name": "Malloc3", 00:25:29.139 "num_blocks": 16384, 00:25:29.139 "product_name": "Malloc disk", 00:25:29.139 "supported_io_types": { 00:25:29.139 "abort": true, 00:25:29.139 "compare": false, 00:25:29.139 "compare_and_write": false, 00:25:29.139 "flush": true, 00:25:29.139 "nvme_admin": false, 00:25:29.139 "nvme_io": false, 00:25:29.139 "read": true, 00:25:29.139 "reset": true, 
00:25:29.139 "unmap": true, 00:25:29.139 "write": true, 00:25:29.139 "write_zeroes": true 00:25:29.139 }, 00:25:29.139 "uuid": "947c9235-cc48-4913-b9e4-1c58375a9ee3", 00:25:29.139 "zoned": false 00:25:29.139 }, 00:25:29.139 { 00:25:29.139 "aliases": [ 00:25:29.139 "e15a2b76-127f-5414-9bf3-4359cfa56e54" 00:25:29.139 ], 00:25:29.139 "assigned_rate_limits": { 00:25:29.139 "r_mbytes_per_sec": 0, 00:25:29.139 "rw_ios_per_sec": 0, 00:25:29.139 "rw_mbytes_per_sec": 0, 00:25:29.139 "w_mbytes_per_sec": 0 00:25:29.139 }, 00:25:29.139 "block_size": 512, 00:25:29.139 "claimed": false, 00:25:29.139 "driver_specific": { 00:25:29.139 "passthru": { 00:25:29.139 "base_bdev_name": "Malloc3", 00:25:29.139 "name": "Passthru0" 00:25:29.139 } 00:25:29.139 }, 00:25:29.139 "memory_domains": [ 00:25:29.139 { 00:25:29.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.139 "dma_device_type": 2 00:25:29.139 } 00:25:29.139 ], 00:25:29.139 "name": "Passthru0", 00:25:29.139 "num_blocks": 16384, 00:25:29.139 "product_name": "passthru", 00:25:29.139 "supported_io_types": { 00:25:29.139 "abort": true, 00:25:29.139 "compare": false, 00:25:29.139 "compare_and_write": false, 00:25:29.139 "flush": true, 00:25:29.139 "nvme_admin": false, 00:25:29.139 "nvme_io": false, 00:25:29.139 "read": true, 00:25:29.139 "reset": true, 00:25:29.139 "unmap": true, 00:25:29.139 "write": true, 00:25:29.139 "write_zeroes": true 00:25:29.139 }, 00:25:29.139 "uuid": "e15a2b76-127f-5414-9bf3-4359cfa56e54", 00:25:29.139 "zoned": false 00:25:29.140 } 00:25:29.140 ]' 00:25:29.140 08:22:02 -- rpc/rpc.sh@21 -- # jq length 00:25:29.140 08:22:02 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:25:29.140 08:22:02 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:25:29.140 08:22:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.140 08:22:02 -- common/autotest_common.sh@10 -- # set +x 00:25:29.140 08:22:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.140 08:22:02 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:25:29.140 08:22:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.140 08:22:02 -- common/autotest_common.sh@10 -- # set +x 00:25:29.140 08:22:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.140 08:22:02 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:25:29.140 08:22:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.140 08:22:02 -- common/autotest_common.sh@10 -- # set +x 00:25:29.140 08:22:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.140 08:22:02 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:25:29.140 08:22:02 -- rpc/rpc.sh@26 -- # jq length 00:25:29.140 ************************************ 00:25:29.140 END TEST rpc_daemon_integrity 00:25:29.140 ************************************ 00:25:29.140 08:22:02 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:25:29.140 00:25:29.140 real 0m0.295s 00:25:29.140 user 0m0.189s 00:25:29.140 sys 0m0.045s 00:25:29.140 08:22:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:29.140 08:22:02 -- common/autotest_common.sh@10 -- # set +x 00:25:29.140 08:22:02 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:29.140 08:22:02 -- rpc/rpc.sh@84 -- # killprocess 55797 00:25:29.140 08:22:02 -- common/autotest_common.sh@926 -- # '[' -z 55797 ']' 00:25:29.140 08:22:02 -- common/autotest_common.sh@930 -- # kill -0 55797 00:25:29.140 08:22:02 -- common/autotest_common.sh@931 -- # uname 00:25:29.140 08:22:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:29.140 08:22:02 -- common/autotest_common.sh@932 -- 
# ps --no-headers -o comm= 55797 00:25:29.140 killing process with pid 55797 00:25:29.140 08:22:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:29.140 08:22:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:29.140 08:22:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55797' 00:25:29.140 08:22:02 -- common/autotest_common.sh@945 -- # kill 55797 00:25:29.140 08:22:02 -- common/autotest_common.sh@950 -- # wait 55797 00:25:29.706 00:25:29.706 real 0m2.976s 00:25:29.706 user 0m3.819s 00:25:29.706 sys 0m0.816s 00:25:29.706 08:22:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:29.706 08:22:02 -- common/autotest_common.sh@10 -- # set +x 00:25:29.706 ************************************ 00:25:29.706 END TEST rpc 00:25:29.706 ************************************ 00:25:29.706 08:22:02 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:25:29.706 08:22:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:29.706 08:22:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:29.706 08:22:02 -- common/autotest_common.sh@10 -- # set +x 00:25:29.706 ************************************ 00:25:29.706 START TEST rpc_client 00:25:29.706 ************************************ 00:25:29.706 08:22:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:25:29.706 * Looking for test storage... 00:25:29.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:25:29.706 08:22:02 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:25:29.706 OK 00:25:29.706 08:22:02 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:25:29.706 00:25:29.706 real 0m0.145s 00:25:29.706 user 0m0.060s 00:25:29.706 sys 0m0.095s 00:25:29.706 08:22:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:29.706 08:22:02 -- common/autotest_common.sh@10 -- # set +x 00:25:29.706 ************************************ 00:25:29.706 END TEST rpc_client 00:25:29.706 ************************************ 00:25:29.965 08:22:03 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:25:29.965 08:22:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:29.965 08:22:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:29.965 08:22:03 -- common/autotest_common.sh@10 -- # set +x 00:25:29.965 ************************************ 00:25:29.965 START TEST json_config 00:25:29.965 ************************************ 00:25:29.965 08:22:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:25:29.965 08:22:03 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:29.965 08:22:03 -- nvmf/common.sh@7 -- # uname -s 00:25:29.965 08:22:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.965 08:22:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.965 08:22:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.965 08:22:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.965 08:22:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.965 08:22:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.965 08:22:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.965 08:22:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.965 08:22:03 -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.965 08:22:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.965 08:22:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:25:29.965 08:22:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:25:29.965 08:22:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.965 08:22:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:29.965 08:22:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:25:29.965 08:22:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:29.965 08:22:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.965 08:22:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.965 08:22:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.965 08:22:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.965 08:22:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.965 08:22:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.965 08:22:03 -- paths/export.sh@5 -- # export PATH 00:25:29.965 08:22:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.965 08:22:03 -- nvmf/common.sh@46 -- # : 0 00:25:29.965 08:22:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:29.965 08:22:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:29.965 08:22:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:29.965 08:22:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:29.965 08:22:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.965 08:22:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:29.965 08:22:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:29.965 08:22:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:29.965 08:22:03 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:25:29.965 08:22:03 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 
]] 00:25:29.965 08:22:03 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:25:29.965 08:22:03 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:25:29.965 08:22:03 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:25:29.965 08:22:03 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:25:29.965 08:22:03 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:25:29.965 08:22:03 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:25:29.965 08:22:03 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:25:29.965 08:22:03 -- json_config/json_config.sh@32 -- # declare -A app_params 00:25:29.965 08:22:03 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:25:29.965 08:22:03 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:25:29.965 08:22:03 -- json_config/json_config.sh@43 -- # last_event_id=0 00:25:29.965 08:22:03 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:25:29.965 INFO: JSON configuration test init 00:25:29.965 08:22:03 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:25:29.965 08:22:03 -- json_config/json_config.sh@420 -- # json_config_test_init 00:25:29.965 08:22:03 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:25:29.965 08:22:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:29.965 08:22:03 -- common/autotest_common.sh@10 -- # set +x 00:25:29.965 08:22:03 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:25:29.965 08:22:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:29.965 08:22:03 -- common/autotest_common.sh@10 -- # set +x 00:25:29.965 08:22:03 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:25:29.966 08:22:03 -- json_config/json_config.sh@98 -- # local app=target 00:25:29.966 08:22:03 -- json_config/json_config.sh@99 -- # shift 00:25:29.966 08:22:03 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:25:29.966 08:22:03 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:25:29.966 08:22:03 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:25:29.966 08:22:03 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:25:29.966 08:22:03 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:25:29.966 08:22:03 -- json_config/json_config.sh@111 -- # app_pid[$app]=56097 00:25:29.966 Waiting for target to run... 00:25:29.966 08:22:03 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 
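[annotation] The json_config suite is now blocking until the target it just forked answers RPC. The waitforlisten helper comes from test/common/autotest_common.sh and is not expanded in this trace; a minimal equivalent sketch, assuming only the rpc_get_methods RPC and the socket path used above, would be:

    # Poll the RPC socket until spdk_tgt answers (sketch of waitforlisten;
    # the real helper also checks that the pid is still alive between tries).
    wait_for_tgt() {
        local sock=${1:-/var/tmp/spdk_tgt.sock}
        for _ in $(seq 1 100); do
            # rpc_get_methods is a cheap query; -t 1 caps each attempt at one second
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods \
                >/dev/null 2>&1 && return 0
            sleep 0.5
        done
        return 1
    }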
00:25:29.966 08:22:03 -- json_config/json_config.sh@114 -- # waitforlisten 56097 /var/tmp/spdk_tgt.sock 00:25:29.966 08:22:03 -- common/autotest_common.sh@819 -- # '[' -z 56097 ']' 00:25:29.966 08:22:03 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:25:29.966 08:22:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:25:29.966 08:22:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:29.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:25:29.966 08:22:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:25:29.966 08:22:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:29.966 08:22:03 -- common/autotest_common.sh@10 -- # set +x 00:25:29.966 [2024-04-17 08:22:03.247680] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:29.966 [2024-04-17 08:22:03.247767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56097 ] 00:25:30.531 [2024-04-17 08:22:03.608718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.531 [2024-04-17 08:22:03.692982] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:30.531 [2024-04-17 08:22:03.693128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.117 08:22:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:31.117 08:22:04 -- common/autotest_common.sh@852 -- # return 0 00:25:31.117 08:22:04 -- json_config/json_config.sh@115 -- # echo '' 00:25:31.117 00:25:31.117 08:22:04 -- json_config/json_config.sh@322 -- # create_accel_config 00:25:31.117 08:22:04 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:25:31.117 08:22:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:31.117 08:22:04 -- common/autotest_common.sh@10 -- # set +x 00:25:31.117 08:22:04 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:25:31.117 08:22:04 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:25:31.117 08:22:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:31.117 08:22:04 -- common/autotest_common.sh@10 -- # set +x 00:25:31.117 08:22:04 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:25:31.117 08:22:04 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:25:31.117 08:22:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:25:31.375 08:22:04 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:25:31.375 08:22:04 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:25:31.375 08:22:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:31.375 08:22:04 -- common/autotest_common.sh@10 -- # set +x 00:25:31.375 08:22:04 -- json_config/json_config.sh@48 -- # local ret=0 00:25:31.375 08:22:04 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:25:31.375 08:22:04 -- json_config/json_config.sh@49 -- # local enabled_types 00:25:31.375 08:22:04 -- json_config/json_config.sh@51 -- 
# tgt_rpc notify_get_types 00:25:31.375 08:22:04 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:25:31.375 08:22:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:25:31.634 08:22:04 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:25:31.634 08:22:04 -- json_config/json_config.sh@51 -- # local get_types 00:25:31.634 08:22:04 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:25:31.634 08:22:04 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:25:31.634 08:22:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:31.634 08:22:04 -- common/autotest_common.sh@10 -- # set +x 00:25:31.634 08:22:04 -- json_config/json_config.sh@58 -- # return 0 00:25:31.634 08:22:04 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:25:31.634 08:22:04 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:25:31.634 08:22:04 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:25:31.634 08:22:04 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:25:31.634 08:22:04 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:25:31.634 08:22:04 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:25:31.634 08:22:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:31.634 08:22:04 -- common/autotest_common.sh@10 -- # set +x 00:25:31.634 08:22:04 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:25:31.634 08:22:04 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:25:31.634 08:22:04 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:25:31.634 08:22:04 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:25:31.634 08:22:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:25:31.892 MallocForNvmf0 00:25:31.892 08:22:05 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:25:31.892 08:22:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:25:32.151 MallocForNvmf1 00:25:32.151 08:22:05 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:25:32.151 08:22:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:25:32.151 [2024-04-17 08:22:05.402807] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.151 08:22:05 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:32.151 08:22:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:32.409 08:22:05 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:25:32.409 08:22:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:25:32.667 08:22:05 -- json_config/json_config.sh@301 -- # 
tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:25:32.667 08:22:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:25:32.936 08:22:06 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:25:32.936 08:22:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:25:32.936 [2024-04-17 08:22:06.169869] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:32.936 08:22:06 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:25:32.936 08:22:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:32.936 08:22:06 -- common/autotest_common.sh@10 -- # set +x 00:25:32.936 08:22:06 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:25:32.936 08:22:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:32.936 08:22:06 -- common/autotest_common.sh@10 -- # set +x 00:25:33.222 08:22:06 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:25:33.222 08:22:06 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:25:33.222 08:22:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:25:33.222 MallocBdevForConfigChangeCheck 00:25:33.222 08:22:06 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:25:33.222 08:22:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:33.222 08:22:06 -- common/autotest_common.sh@10 -- # set +x 00:25:33.222 08:22:06 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:25:33.222 08:22:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:25:33.788 INFO: shutting down applications... 00:25:33.788 08:22:06 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
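[annotation] Before the shutdown check below, it is worth condensing the tgt_rpc calls above: the whole NVMe-oF target was assembled with seven RPCs, all taken verbatim from this run (rpc and sock shorthand added here for readability):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk_tgt.sock
    $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420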
00:25:33.788 08:22:06 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:25:33.788 08:22:06 -- json_config/json_config.sh@431 -- # json_config_clear target 00:25:33.788 08:22:06 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:25:33.788 08:22:06 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:25:34.046 Calling clear_iscsi_subsystem 00:25:34.046 Calling clear_nvmf_subsystem 00:25:34.046 Calling clear_nbd_subsystem 00:25:34.046 Calling clear_ublk_subsystem 00:25:34.046 Calling clear_vhost_blk_subsystem 00:25:34.046 Calling clear_vhost_scsi_subsystem 00:25:34.046 Calling clear_scheduler_subsystem 00:25:34.046 Calling clear_bdev_subsystem 00:25:34.046 Calling clear_accel_subsystem 00:25:34.046 Calling clear_vmd_subsystem 00:25:34.046 Calling clear_sock_subsystem 00:25:34.046 Calling clear_iobuf_subsystem 00:25:34.046 08:22:07 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:25:34.046 08:22:07 -- json_config/json_config.sh@396 -- # count=100 00:25:34.046 08:22:07 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:25:34.046 08:22:07 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:25:34.046 08:22:07 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:25:34.046 08:22:07 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:25:34.304 08:22:07 -- json_config/json_config.sh@398 -- # break 00:25:34.304 08:22:07 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:25:34.304 08:22:07 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:25:34.304 08:22:07 -- json_config/json_config.sh@120 -- # local app=target 00:25:34.304 08:22:07 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:25:34.304 08:22:07 -- json_config/json_config.sh@124 -- # [[ -n 56097 ]] 00:25:34.304 08:22:07 -- json_config/json_config.sh@127 -- # kill -SIGINT 56097 00:25:34.304 08:22:07 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:25:34.304 08:22:07 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:25:34.304 08:22:07 -- json_config/json_config.sh@130 -- # kill -0 56097 00:25:34.304 08:22:07 -- json_config/json_config.sh@134 -- # sleep 0.5 00:25:34.871 08:22:08 -- json_config/json_config.sh@129 -- # (( i++ )) 00:25:34.871 08:22:08 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:25:34.871 08:22:08 -- json_config/json_config.sh@130 -- # kill -0 56097 00:25:34.871 08:22:08 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:25:34.871 08:22:08 -- json_config/json_config.sh@132 -- # break 00:25:34.871 08:22:08 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:25:34.871 SPDK target shutdown done 00:25:34.871 08:22:08 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:25:34.871 INFO: relaunching applications... 00:25:34.871 08:22:08 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
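[annotation] Before following the relaunch, note the shutdown pattern that just completed above: send SIGINT, then poll with kill -0 until the pid disappears, at most 30 tries 0.5 s apart, as the (( i < 30 )) loop shows. As a standalone sketch:

    # Shutdown pattern from json_config_test_shutdown_app (sketch)
    kill -SIGINT "$pid"
    for _ in $(seq 1 30); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 only tests that the pid exists
        sleep 0.5
    done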
00:25:34.871 08:22:08 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:25:34.871 08:22:08 -- json_config/json_config.sh@98 -- # local app=target 00:25:34.871 08:22:08 -- json_config/json_config.sh@99 -- # shift 00:25:34.871 08:22:08 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:25:34.871 08:22:08 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:25:34.871 08:22:08 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:25:34.871 08:22:08 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:25:34.871 08:22:08 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:25:34.871 08:22:08 -- json_config/json_config.sh@111 -- # app_pid[$app]=56365 00:25:34.871 Waiting for target to run... 00:25:34.871 08:22:08 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:25:34.871 08:22:08 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:25:34.871 08:22:08 -- json_config/json_config.sh@114 -- # waitforlisten 56365 /var/tmp/spdk_tgt.sock 00:25:34.871 08:22:08 -- common/autotest_common.sh@819 -- # '[' -z 56365 ']' 00:25:34.871 08:22:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:25:34.871 08:22:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:34.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:25:34.871 08:22:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:25:34.871 08:22:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:34.871 08:22:08 -- common/autotest_common.sh@10 -- # set +x 00:25:34.871 [2024-04-17 08:22:08.129024] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:34.871 [2024-04-17 08:22:08.129110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56365 ] 00:25:35.437 [2024-04-17 08:22:08.483579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.437 [2024-04-17 08:22:08.565074] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:35.437 [2024-04-17 08:22:08.565219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.696 [2024-04-17 08:22:08.868211] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.696 [2024-04-17 08:22:08.900193] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:35.696 08:22:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:35.696 08:22:08 -- common/autotest_common.sh@852 -- # return 0 00:25:35.696 00:25:35.696 08:22:08 -- json_config/json_config.sh@115 -- # echo '' 00:25:35.696 08:22:08 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:25:35.696 INFO: Checking if target configuration is the same... 00:25:35.696 08:22:08 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
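[annotation] The comparison about to run is test/json_config/json_diff.sh. Its essence: dump the live config with save_config, key-sort both documents with config_filter.py, and byte-compare. A condensed sketch follows; the /tmp/live.json and /tmp/saved.json names are illustrative, the real script uses mktemp as seen in the trace below:

    spdk=/home/vagrant/spdk_repo/spdk
    $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | $spdk/test/json_config/config_filter.py -method sort > /tmp/live.json
    $spdk/test/json_config/config_filter.py -method sort \
        < $spdk/spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'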
00:25:35.696 08:22:08 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:25:35.696 08:22:08 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:25:35.696 08:22:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:25:35.696 + '[' 2 -ne 2 ']' 00:25:35.696 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:25:35.696 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:25:35.696 + rootdir=/home/vagrant/spdk_repo/spdk 00:25:35.696 +++ basename /dev/fd/62 00:25:35.696 ++ mktemp /tmp/62.XXX 00:25:35.696 + tmp_file_1=/tmp/62.tGr 00:25:35.696 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:25:35.954 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:25:35.954 + tmp_file_2=/tmp/spdk_tgt_config.json.bHG 00:25:35.954 + ret=0 00:25:35.954 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:25:36.213 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:25:36.213 + diff -u /tmp/62.tGr /tmp/spdk_tgt_config.json.bHG 00:25:36.213 INFO: JSON config files are the same 00:25:36.213 + echo 'INFO: JSON config files are the same' 00:25:36.213 + rm /tmp/62.tGr /tmp/spdk_tgt_config.json.bHG 00:25:36.213 + exit 0 00:25:36.213 08:22:09 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:25:36.213 INFO: changing configuration and checking if this can be detected... 00:25:36.213 08:22:09 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:25:36.213 08:22:09 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:25:36.213 08:22:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:25:36.472 08:22:09 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:25:36.472 08:22:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:25:36.472 08:22:09 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:25:36.472 + '[' 2 -ne 2 ']' 00:25:36.472 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:25:36.472 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:25:36.472 + rootdir=/home/vagrant/spdk_repo/spdk 00:25:36.472 +++ basename /dev/fd/62 00:25:36.472 ++ mktemp /tmp/62.XXX 00:25:36.472 + tmp_file_1=/tmp/62.fuL 00:25:36.472 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:25:36.472 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:25:36.472 + tmp_file_2=/tmp/spdk_tgt_config.json.Loh 00:25:36.472 + ret=0 00:25:36.472 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:25:36.731 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:25:36.731 + diff -u /tmp/62.fuL /tmp/spdk_tgt_config.json.Loh 00:25:36.731 + ret=1 00:25:36.731 + echo '=== Start of file: /tmp/62.fuL ===' 00:25:36.731 + cat /tmp/62.fuL 00:25:36.731 + echo '=== End of file: /tmp/62.fuL ===' 00:25:36.731 + echo '' 00:25:36.731 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Loh ===' 00:25:36.731 + cat /tmp/spdk_tgt_config.json.Loh 00:25:36.732 + echo '=== End of file: /tmp/spdk_tgt_config.json.Loh ===' 00:25:36.732 + echo '' 00:25:36.732 + rm /tmp/62.fuL /tmp/spdk_tgt_config.json.Loh 00:25:36.732 + exit 1 00:25:36.732 INFO: configuration change detected. 00:25:36.732 08:22:10 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:25:36.732 08:22:10 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:25:36.732 08:22:10 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:25:36.732 08:22:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:36.732 08:22:10 -- common/autotest_common.sh@10 -- # set +x 00:25:36.732 08:22:10 -- json_config/json_config.sh@360 -- # local ret=0 00:25:36.732 08:22:10 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:25:36.732 08:22:10 -- json_config/json_config.sh@370 -- # [[ -n 56365 ]] 00:25:36.732 08:22:10 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:25:36.732 08:22:10 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:25:36.732 08:22:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:36.732 08:22:10 -- common/autotest_common.sh@10 -- # set +x 00:25:36.991 08:22:10 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:25:36.991 08:22:10 -- json_config/json_config.sh@246 -- # uname -s 00:25:36.991 08:22:10 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:25:36.991 08:22:10 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:25:36.991 08:22:10 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:25:36.991 08:22:10 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:25:36.991 08:22:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:36.991 08:22:10 -- common/autotest_common.sh@10 -- # set +x 00:25:36.991 08:22:10 -- json_config/json_config.sh@376 -- # killprocess 56365 00:25:36.991 08:22:10 -- common/autotest_common.sh@926 -- # '[' -z 56365 ']' 00:25:36.991 08:22:10 -- common/autotest_common.sh@930 -- # kill -0 56365 00:25:36.991 08:22:10 -- common/autotest_common.sh@931 -- # uname 00:25:36.991 08:22:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:36.991 08:22:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56365 00:25:36.991 killing process with pid 56365 00:25:36.991 08:22:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:36.991 08:22:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:36.991 08:22:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56365' 00:25:36.991 
08:22:10 -- common/autotest_common.sh@945 -- # kill 56365 00:25:36.991 08:22:10 -- common/autotest_common.sh@950 -- # wait 56365 00:25:37.250 08:22:10 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:25:37.250 08:22:10 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:25:37.250 08:22:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:37.250 08:22:10 -- common/autotest_common.sh@10 -- # set +x 00:25:37.250 INFO: Success 00:25:37.250 08:22:10 -- json_config/json_config.sh@381 -- # return 0 00:25:37.250 08:22:10 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:25:37.250 ************************************ 00:25:37.250 END TEST json_config 00:25:37.250 ************************************ 00:25:37.250 00:25:37.250 real 0m7.397s 00:25:37.250 user 0m10.120s 00:25:37.250 sys 0m1.873s 00:25:37.250 08:22:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:37.250 08:22:10 -- common/autotest_common.sh@10 -- # set +x 00:25:37.250 08:22:10 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:25:37.250 08:22:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:37.250 08:22:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:37.250 08:22:10 -- common/autotest_common.sh@10 -- # set +x 00:25:37.250 ************************************ 00:25:37.250 START TEST json_config_extra_key 00:25:37.250 ************************************ 00:25:37.250 08:22:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:25:37.510 08:22:10 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:37.510 08:22:10 -- nvmf/common.sh@7 -- # uname -s 00:25:37.510 08:22:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.510 08:22:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.510 08:22:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.510 08:22:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.510 08:22:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.510 08:22:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.510 08:22:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.510 08:22:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.510 08:22:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.510 08:22:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.510 08:22:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:25:37.510 08:22:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:25:37.510 08:22:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.510 08:22:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.510 08:22:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:25:37.510 08:22:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:37.510 08:22:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.510 08:22:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.510 08:22:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.510 08:22:10 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.510 08:22:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.510 08:22:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.510 08:22:10 -- paths/export.sh@5 -- # export PATH 00:25:37.510 08:22:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.510 08:22:10 -- nvmf/common.sh@46 -- # : 0 00:25:37.510 08:22:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:37.510 08:22:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:37.510 08:22:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:37.510 08:22:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.510 08:22:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.510 08:22:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:37.510 08:22:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:37.510 08:22:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:37.510 08:22:10 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:25:37.510 08:22:10 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:25:37.510 08:22:10 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:25:37.510 08:22:10 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:25:37.510 08:22:10 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:25:37.510 08:22:10 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:25:37.510 08:22:10 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:25:37.510 08:22:10 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:25:37.510 08:22:10 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:25:37.510 08:22:10 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:25:37.510 INFO: launching applications... 
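[annotation] Unlike the json_config run above, which started the target with --wait-for-rpc and configured it over the socket, json_config_extra_key boots the target directly from a static JSON file. The invocation that follows is equivalent to this standalone command, taken from the trace below:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json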
00:25:37.510 08:22:10 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:25:37.510 08:22:10 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:25:37.510 08:22:10 -- json_config/json_config_extra_key.sh@25 -- # shift 00:25:37.510 08:22:10 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:25:37.510 08:22:10 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:25:37.510 08:22:10 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56534 00:25:37.510 08:22:10 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:25:37.510 Waiting for target to run... 00:25:37.510 08:22:10 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56534 /var/tmp/spdk_tgt.sock 00:25:37.510 08:22:10 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:25:37.510 08:22:10 -- common/autotest_common.sh@819 -- # '[' -z 56534 ']' 00:25:37.510 08:22:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:25:37.510 08:22:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:37.510 08:22:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:25:37.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:25:37.510 08:22:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:37.510 08:22:10 -- common/autotest_common.sh@10 -- # set +x 00:25:37.510 [2024-04-17 08:22:10.683312] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:37.510 [2024-04-17 08:22:10.683494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56534 ] 00:25:37.770 [2024-04-17 08:22:11.023368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.028 [2024-04-17 08:22:11.104131] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:38.028 [2024-04-17 08:22:11.104262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.287 00:25:38.287 INFO: shutting down applications... 00:25:38.287 08:22:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:38.287 08:22:11 -- common/autotest_common.sh@852 -- # return 0 00:25:38.287 08:22:11 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:25:38.287 08:22:11 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
00:25:38.287 08:22:11 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:25:38.287 08:22:11 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:25:38.287 08:22:11 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:25:38.287 08:22:11 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56534 ]] 00:25:38.287 08:22:11 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56534 00:25:38.287 08:22:11 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:25:38.287 08:22:11 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:25:38.287 08:22:11 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56534 00:25:38.287 08:22:11 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:25:38.856 08:22:12 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:25:38.856 08:22:12 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:25:38.856 08:22:12 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56534 00:25:38.856 08:22:12 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:25:38.856 08:22:12 -- json_config/json_config_extra_key.sh@52 -- # break 00:25:38.856 08:22:12 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:25:38.856 08:22:12 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:25:38.856 SPDK target shutdown done 00:25:38.856 08:22:12 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:25:38.856 Success 00:25:38.856 00:25:38.856 real 0m1.524s 00:25:38.856 user 0m1.309s 00:25:38.856 sys 0m0.373s 00:25:38.856 08:22:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:38.856 08:22:12 -- common/autotest_common.sh@10 -- # set +x 00:25:38.856 ************************************ 00:25:38.856 END TEST json_config_extra_key 00:25:38.856 ************************************ 00:25:38.856 08:22:12 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:25:38.856 08:22:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:38.856 08:22:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:38.856 08:22:12 -- common/autotest_common.sh@10 -- # set +x 00:25:38.856 ************************************ 00:25:38.856 START TEST alias_rpc 00:25:38.856 ************************************ 00:25:38.856 08:22:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:25:39.115 * Looking for test storage... 00:25:39.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:25:39.115 08:22:12 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:25:39.115 08:22:12 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:39.115 08:22:12 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56609 00:25:39.115 08:22:12 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56609 00:25:39.115 08:22:12 -- common/autotest_common.sh@819 -- # '[' -z 56609 ']' 00:25:39.115 08:22:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.115 08:22:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:39.115 08:22:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
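[annotation] The alias_rpc test starting here revolves around one call, visible a few lines below: rpc.py load_config -i against the default /var/tmp/spdk.sock socket. Assuming -i is the usual --include-aliases spelling (the flag is not expanded in this trace), the exercised path is:

    # Sketch: replay a JSON config while resolving deprecated RPC method aliases.
    # conf.json is a placeholder name; the test supplies its own config on stdin.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i < conf.json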
00:25:39.115 08:22:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:39.115 08:22:12 -- common/autotest_common.sh@10 -- # set +x 00:25:39.115 [2024-04-17 08:22:12.275478] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:39.115 [2024-04-17 08:22:12.275667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56609 ] 00:25:39.115 [2024-04-17 08:22:12.416306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.374 [2024-04-17 08:22:12.520597] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:39.374 [2024-04-17 08:22:12.520850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.963 08:22:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:39.963 08:22:13 -- common/autotest_common.sh@852 -- # return 0 00:25:39.963 08:22:13 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:25:40.226 08:22:13 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56609 00:25:40.226 08:22:13 -- common/autotest_common.sh@926 -- # '[' -z 56609 ']' 00:25:40.226 08:22:13 -- common/autotest_common.sh@930 -- # kill -0 56609 00:25:40.226 08:22:13 -- common/autotest_common.sh@931 -- # uname 00:25:40.226 08:22:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:40.226 08:22:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56609 00:25:40.226 killing process with pid 56609 00:25:40.227 08:22:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:40.227 08:22:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:40.227 08:22:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56609' 00:25:40.227 08:22:13 -- common/autotest_common.sh@945 -- # kill 56609 00:25:40.227 08:22:13 -- common/autotest_common.sh@950 -- # wait 56609 00:25:40.485 ************************************ 00:25:40.485 END TEST alias_rpc 00:25:40.485 ************************************ 00:25:40.485 00:25:40.485 real 0m1.632s 00:25:40.485 user 0m1.760s 00:25:40.485 sys 0m0.389s 00:25:40.485 08:22:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.485 08:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:40.485 08:22:13 -- spdk/autotest.sh@182 -- # [[ 1 -eq 0 ]] 00:25:40.485 08:22:13 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:25:40.485 08:22:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:40.485 08:22:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:40.485 08:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:40.485 ************************************ 00:25:40.485 START TEST dpdk_mem_utility 00:25:40.485 ************************************ 00:25:40.485 08:22:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:25:40.745 * Looking for test storage... 
00:25:40.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:25:40.745 08:22:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:25:40.745 08:22:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:40.745 08:22:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56696 00:25:40.745 08:22:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56696 00:25:40.745 08:22:13 -- common/autotest_common.sh@819 -- # '[' -z 56696 ']' 00:25:40.745 08:22:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.745 08:22:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:40.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.745 08:22:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.745 08:22:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:40.745 08:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:40.745 [2024-04-17 08:22:13.968985] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:40.745 [2024-04-17 08:22:13.969071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56696 ] 00:25:41.004 [2024-04-17 08:22:14.106216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.004 [2024-04-17 08:22:14.208866] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:41.004 [2024-04-17 08:22:14.208994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.571 08:22:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:41.571 08:22:14 -- common/autotest_common.sh@852 -- # return 0 00:25:41.571 08:22:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:25:41.571 08:22:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:25:41.571 08:22:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.571 08:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:41.571 { 00:25:41.571 "filename": "/tmp/spdk_mem_dump.txt" 00:25:41.571 } 00:25:41.571 08:22:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.571 08:22:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:25:41.831 DPDK memory size 814.000000 MiB in 1 heap(s) 00:25:41.831 1 heaps totaling size 814.000000 MiB 00:25:41.831 size: 814.000000 MiB heap id: 0 00:25:41.831 end heaps---------- 00:25:41.831 8 mempools totaling size 598.116089 MiB 00:25:41.831 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:25:41.831 size: 158.602051 MiB name: PDU_data_out_Pool 00:25:41.831 size: 84.521057 MiB name: bdev_io_56696 00:25:41.831 size: 51.011292 MiB name: evtpool_56696 00:25:41.831 size: 50.003479 MiB name: msgpool_56696 00:25:41.831 size: 21.763794 MiB name: PDU_Pool 00:25:41.831 size: 19.513306 MiB name: SCSI_TASK_Pool 00:25:41.831 size: 0.026123 MiB name: Session_Pool 00:25:41.831 end mempools------- 00:25:41.831 6 memzones totaling size 4.142822 MiB 00:25:41.831 size: 1.000366 MiB name: RG_ring_0_56696 
00:25:41.831 size: 1.000366 MiB name: RG_ring_1_56696 00:25:41.831 size: 1.000366 MiB name: RG_ring_4_56696 00:25:41.831 size: 1.000366 MiB name: RG_ring_5_56696 00:25:41.832 size: 0.125366 MiB name: RG_ring_2_56696 00:25:41.832 size: 0.015991 MiB name: RG_ring_3_56696 00:25:41.832 end memzones------- 00:25:41.832 08:22:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:25:41.832 heap id: 0 total size: 814.000000 MiB number of busy elements: 221 number of free elements: 15 00:25:41.832 list of free elements. size: 12.486389 MiB 00:25:41.832 element at address: 0x200000400000 with size: 1.999512 MiB 00:25:41.832 element at address: 0x200018e00000 with size: 0.999878 MiB 00:25:41.832 element at address: 0x200019000000 with size: 0.999878 MiB 00:25:41.832 element at address: 0x200003e00000 with size: 0.996277 MiB 00:25:41.832 element at address: 0x200031c00000 with size: 0.994446 MiB 00:25:41.832 element at address: 0x200013800000 with size: 0.978699 MiB 00:25:41.832 element at address: 0x200007000000 with size: 0.959839 MiB 00:25:41.832 element at address: 0x200019200000 with size: 0.936584 MiB 00:25:41.832 element at address: 0x200000200000 with size: 0.837219 MiB 00:25:41.832 element at address: 0x20001aa00000 with size: 0.571899 MiB 00:25:41.832 element at address: 0x20000b200000 with size: 0.489258 MiB 00:25:41.832 element at address: 0x200000800000 with size: 0.486877 MiB 00:25:41.832 element at address: 0x200019400000 with size: 0.485657 MiB 00:25:41.832 element at address: 0x200027e00000 with size: 0.398682 MiB 00:25:41.832 element at address: 0x200003a00000 with size: 0.351685 MiB 00:25:41.832 list of standard malloc elements. size: 199.251038 MiB 00:25:41.832 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:25:41.832 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:25:41.832 element at address: 0x200018efff80 with size: 1.000122 MiB 00:25:41.832 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:25:41.832 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:25:41.832 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:25:41.832 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:25:41.832 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:25:41.832 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:25:41.832 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:25:41.832 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003adb300 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003adb500 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003affa80 with size: 0.000183 MiB 00:25:41.832 element at 
address: 0x200003affb40 with size: 0.000183 MiB 00:25:41.832 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:25:41.832 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:25:41.832 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa93dc0 
with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:25:41.833 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e66100 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e661c0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6cdc0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6db00 with size: 0.000183 MiB 
00:25:41.833 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:25:41.833 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:25:41.833 list of memzone associated elements. 
size: 602.262573 MiB 00:25:41.833 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:25:41.833 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:25:41.833 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:25:41.833 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:25:41.833 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:25:41.833 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56696_0 00:25:41.833 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:25:41.833 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56696_0 00:25:41.833 element at address: 0x200003fff380 with size: 48.003052 MiB 00:25:41.833 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56696_0 00:25:41.833 element at address: 0x2000195be940 with size: 20.255554 MiB 00:25:41.833 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:25:41.833 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:25:41.833 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:25:41.833 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:25:41.833 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56696 00:25:41.833 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:25:41.833 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56696 00:25:41.833 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:25:41.833 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56696 00:25:41.833 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:25:41.833 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:25:41.833 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:25:41.833 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:25:41.833 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:25:41.833 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:25:41.833 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:25:41.833 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:25:41.833 element at address: 0x200003eff180 with size: 1.000488 MiB 00:25:41.833 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56696 00:25:41.833 element at address: 0x200003affc00 with size: 1.000488 MiB 00:25:41.833 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56696 00:25:41.833 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:25:41.833 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56696 00:25:41.833 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:25:41.833 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56696 00:25:41.833 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:25:41.833 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56696 00:25:41.833 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:25:41.833 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:25:41.833 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:25:41.833 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:25:41.834 element at address: 0x20001947c540 with size: 0.250488 MiB 00:25:41.834 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:25:41.834 element at address: 0x200003adf880 with size: 0.125488 MiB 00:25:41.834 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_56696 00:25:41.834 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:25:41.834 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:25:41.834 element at address: 0x200027e66280 with size: 0.023743 MiB 00:25:41.834 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:25:41.834 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:25:41.834 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56696 00:25:41.834 element at address: 0x200027e6c3c0 with size: 0.002441 MiB 00:25:41.834 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:25:41.834 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:25:41.834 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56696 00:25:41.834 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:25:41.834 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56696 00:25:41.834 element at address: 0x200027e6ce80 with size: 0.000305 MiB 00:25:41.834 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:25:41.834 08:22:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:25:41.834 08:22:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56696 00:25:41.834 08:22:14 -- common/autotest_common.sh@926 -- # '[' -z 56696 ']' 00:25:41.834 08:22:14 -- common/autotest_common.sh@930 -- # kill -0 56696 00:25:41.834 08:22:14 -- common/autotest_common.sh@931 -- # uname 00:25:41.834 08:22:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:41.834 08:22:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56696 00:25:41.834 killing process with pid 56696 00:25:41.834 08:22:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:41.834 08:22:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:41.834 08:22:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56696' 00:25:41.834 08:22:14 -- common/autotest_common.sh@945 -- # kill 56696 00:25:41.834 08:22:14 -- common/autotest_common.sh@950 -- # wait 56696 00:25:42.093 ************************************ 00:25:42.093 END TEST dpdk_mem_utility 00:25:42.093 ************************************ 00:25:42.093 00:25:42.093 real 0m1.564s 00:25:42.093 user 0m1.628s 00:25:42.093 sys 0m0.395s 00:25:42.093 08:22:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:42.093 08:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:42.093 08:22:15 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:25:42.093 08:22:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:42.093 08:22:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:42.093 08:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:42.093 ************************************ 00:25:42.093 START TEST event 00:25:42.093 ************************************ 00:25:42.093 08:22:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:25:42.352 * Looking for test storage... 
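The dpdk_mem_utility test that just finished is a two-step flow: the env_dpdk_get_mem_stats RPC makes the running spdk_tgt write its DPDK heap/mempool/memzone state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders that dump; the -m 0 form produced the per-element listing for heap id 0 above. Replayed by hand, with paths as in the log:

# Reproduce the dpdk_mem_utility flow against a running spdk_tgt.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
mem_script=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

$rpc env_dpdk_get_mem_stats   # replies {"filename": "/tmp/spdk_mem_dump.txt"}

$mem_script                   # summary: heaps, mempools, memzones
$mem_script -m 0              # element-level detail for heap id 0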
00:25:42.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:25:42.352 08:22:15 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:25:42.352 08:22:15 -- bdev/nbd_common.sh@6 -- # set -e 00:25:42.352 08:22:15 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:25:42.352 08:22:15 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:25:42.352 08:22:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:42.352 08:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:42.352 ************************************ 00:25:42.352 START TEST event_perf 00:25:42.352 ************************************ 00:25:42.352 08:22:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:25:42.352 Running I/O for 1 seconds...[2024-04-17 08:22:15.562156] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:42.352 [2024-04-17 08:22:15.562240] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56784 ] 00:25:42.611 [2024-04-17 08:22:15.705654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:42.611 [2024-04-17 08:22:15.811915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.611 [2024-04-17 08:22:15.812039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:42.611 [2024-04-17 08:22:15.812184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:42.611 [2024-04-17 08:22:15.812185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.990 Running I/O for 1 seconds... 00:25:43.990 lcore 0: 183804 00:25:43.990 lcore 1: 183803 00:25:43.990 lcore 2: 183804 00:25:43.990 lcore 3: 183803 00:25:43.990 done. 00:25:43.990 00:25:43.990 real 0m1.382s 00:25:43.990 user 0m4.199s 00:25:43.990 sys 0m0.058s 00:25:43.990 08:22:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:43.990 08:22:16 -- common/autotest_common.sh@10 -- # set +x 00:25:43.990 ************************************ 00:25:43.990 END TEST event_perf 00:25:43.990 ************************************ 00:25:43.990 08:22:16 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:25:43.990 08:22:16 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:25:43.990 08:22:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:43.990 08:22:16 -- common/autotest_common.sh@10 -- # set +x 00:25:43.990 ************************************ 00:25:43.990 START TEST event_reactor 00:25:43.990 ************************************ 00:25:43.990 08:22:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:25:43.990 [2024-04-17 08:22:17.010472] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
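For the record on event_perf above: it starts one reactor per bit of the -m 0xF core mask and counts events processed on each lcore during the -t 1 second run; the four nearly identical counters (about 183.8k events apiece) show the event ring keeping all reactors evenly loaded. Invocation as used by the wrapper:

# Event throughput microbenchmark: 4 cores (mask 0xF), 1 second.
/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
# prints one "lcore N: <count>" line per reactor, then "done."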
00:25:43.990 [2024-04-17 08:22:17.010647] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56823 ] 00:25:43.990 [2024-04-17 08:22:17.152455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.990 [2024-04-17 08:22:17.255717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.368 test_start 00:25:45.368 oneshot 00:25:45.368 tick 100 00:25:45.368 tick 100 00:25:45.368 tick 250 00:25:45.368 tick 100 00:25:45.368 tick 100 00:25:45.368 tick 100 00:25:45.368 tick 250 00:25:45.368 tick 500 00:25:45.368 tick 100 00:25:45.368 tick 100 00:25:45.368 tick 250 00:25:45.368 tick 100 00:25:45.368 tick 100 00:25:45.368 test_end 00:25:45.368 00:25:45.368 real 0m1.374s 00:25:45.368 user 0m1.211s 00:25:45.368 sys 0m0.056s 00:25:45.368 08:22:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:45.368 08:22:18 -- common/autotest_common.sh@10 -- # set +x 00:25:45.368 ************************************ 00:25:45.368 END TEST event_reactor 00:25:45.368 ************************************ 00:25:45.368 08:22:18 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:25:45.368 08:22:18 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:25:45.368 08:22:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:45.368 08:22:18 -- common/autotest_common.sh@10 -- # set +x 00:25:45.368 ************************************ 00:25:45.368 START TEST event_reactor_perf 00:25:45.368 ************************************ 00:25:45.368 08:22:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:25:45.368 [2024-04-17 08:22:18.449346] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
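Each of these sub-tests is driven through the run_test wrapper from autotest_common.sh, visible in the trace as the '[' N -le 1 ']' argument check, xtrace toggling, and the START/END TEST banners. A hypothetical minimal sketch of what the wrapper does; the real helper also manages timing bookkeeping and xtrace state:

# Minimal run_test sketch: banner, timed execution, banner.
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                  # run the test binary/script with its args
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}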
00:25:45.368 [2024-04-17 08:22:18.449614] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56858 ] 00:25:45.368 [2024-04-17 08:22:18.592022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.368 [2024-04-17 08:22:18.696542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.747 test_start 00:25:46.747 test_end 00:25:46.747 Performance: 449579 events per second 00:25:46.747 00:25:46.747 real 0m1.389s 00:25:46.747 user 0m1.234s 00:25:46.747 sys 0m0.046s 00:25:46.747 08:22:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:46.747 08:22:19 -- common/autotest_common.sh@10 -- # set +x 00:25:46.747 ************************************ 00:25:46.747 END TEST event_reactor_perf 00:25:46.747 ************************************ 00:25:46.747 08:22:19 -- event/event.sh@49 -- # uname -s 00:25:46.747 08:22:19 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:25:46.747 08:22:19 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:25:46.747 08:22:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:46.747 08:22:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:46.747 08:22:19 -- common/autotest_common.sh@10 -- # set +x 00:25:46.747 ************************************ 00:25:46.747 START TEST event_scheduler 00:25:46.747 ************************************ 00:25:46.747 08:22:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:25:46.747 * Looking for test storage... 00:25:46.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:25:46.747 08:22:19 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:25:46.747 08:22:19 -- scheduler/scheduler.sh@35 -- # scheduler_pid=56919 00:25:46.747 08:22:19 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:25:46.747 08:22:19 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:25:46.747 08:22:19 -- scheduler/scheduler.sh@37 -- # waitforlisten 56919 00:25:46.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.747 08:22:19 -- common/autotest_common.sh@819 -- # '[' -z 56919 ']' 00:25:46.747 08:22:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.747 08:22:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:46.747 08:22:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.747 08:22:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:46.747 08:22:19 -- common/autotest_common.sh@10 -- # set +x 00:25:46.747 [2024-04-17 08:22:20.043542] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
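The scheduler test app launched just above uses a start-paused pattern: --wait-for-rpc holds SPDK framework initialization until an RPC releases it, and -p 0x2 selects core 2 as the main lcore (the --main-lcore=2 EAL parameter below). Launch sketch, assuming autotest_common.sh is sourced for waitforlisten:

# Start the scheduler test app paused and wait for its RPC socket.
scheduler=/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler
$scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
scheduler_pid=$!
waitforlisten "$scheduler_pid"   # polls /var/tmp/spdk.sock until RPCs are accepted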
00:25:46.747 [2024-04-17 08:22:20.043609] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56919 ] 00:25:47.006 [2024-04-17 08:22:20.168148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:47.006 [2024-04-17 08:22:20.321697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.006 [2024-04-17 08:22:20.321760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.006 [2024-04-17 08:22:20.321943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:47.006 [2024-04-17 08:22:20.321948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:47.945 08:22:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:47.945 08:22:20 -- common/autotest_common.sh@852 -- # return 0 00:25:47.945 08:22:20 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:25:47.945 08:22:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:47.945 08:22:20 -- common/autotest_common.sh@10 -- # set +x 00:25:47.945 POWER: Env isn't set yet! 00:25:47.945 POWER: Attempting to initialise ACPI cpufreq power management... 00:25:47.945 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:25:47.945 POWER: Cannot set governor of lcore 0 to userspace 00:25:47.945 POWER: Attempting to initialise PSTAT power management... 00:25:47.945 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:25:47.945 POWER: Cannot set governor of lcore 0 to performance 00:25:47.945 POWER: Attempting to initialise AMD PSTATE power management... 00:25:47.945 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:25:47.945 POWER: Cannot set governor of lcore 0 to userspace 00:25:47.945 POWER: Attempting to initialise CPPC power management... 00:25:47.945 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:25:47.945 POWER: Cannot set governor of lcore 0 to userspace 00:25:47.945 POWER: Attempting to initialise VM power management... 00:25:47.945 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:25:47.945 POWER: Unable to set Power Management Environment for lcore 0 00:25:47.945 [2024-04-17 08:22:20.919190] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:25:47.945 [2024-04-17 08:22:20.919227] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:25:47.945 [2024-04-17 08:22:20.919259] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:25:47.945 08:22:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:47.945 08:22:20 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:25:47.945 08:22:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:47.945 08:22:20 -- common/autotest_common.sh@10 -- # set +x 00:25:47.945 [2024-04-17 08:22:21.053421] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
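The POWER lines above are the dynamic scheduler probing each cpufreq driver in turn (ACPI cpufreq, intel pstate, AMD pstate, CPPC, then the VM power channel) and failing in this guest, so it runs without a DPDK power governor; that is what the 'Unable to initialize dpdk governor' notice records, and the test proceeds anyway. The RPC sequence from the trace, written here as direct rpc.py calls in place of the script's rpc_cmd wrapper:

# Select the scheduler while the app is still paused, then finish init.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc framework_set_scheduler dynamic
$rpc framework_start_init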
00:25:47.945 08:22:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:47.945 08:22:21 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:25:47.945 08:22:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:47.945 08:22:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:47.945 08:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:47.945 ************************************ 00:25:47.945 START TEST scheduler_create_thread 00:25:47.945 ************************************ 00:25:47.945 08:22:21 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:25:47.945 08:22:21 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:25:47.945 08:22:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:47.945 08:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:47.945 2 00:25:47.945 08:22:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:47.945 08:22:21 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:25:47.945 08:22:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:47.945 08:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:47.945 3 00:25:47.945 08:22:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:47.945 08:22:21 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:25:47.945 08:22:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:47.945 08:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:47.945 4 00:25:47.945 08:22:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:47.945 08:22:21 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:25:47.945 08:22:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:47.945 08:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:47.945 5 00:25:47.945 08:22:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:47.945 08:22:21 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:25:47.945 08:22:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:47.945 08:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:47.945 6 00:25:47.945 08:22:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:47.945 08:22:21 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:25:47.945 08:22:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:47.945 08:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:47.945 7 00:25:47.945 08:22:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:47.945 08:22:21 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:25:47.945 08:22:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:47.945 08:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:47.945 8 00:25:47.945 08:22:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:47.945 08:22:21 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:25:47.945 08:22:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:47.945 08:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:47.945 9 00:25:47.945 
08:22:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:47.945 08:22:21 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:25:47.945 08:22:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:47.945 08:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:48.512 10 00:25:48.512 08:22:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.512 08:22:21 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:25:48.512 08:22:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.512 08:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:49.884 08:22:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:49.884 08:22:22 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:25:49.884 08:22:22 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:25:49.884 08:22:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:49.884 08:22:22 -- common/autotest_common.sh@10 -- # set +x 00:25:50.462 08:22:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.462 08:22:23 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:25:50.462 08:22:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.462 08:22:23 -- common/autotest_common.sh@10 -- # set +x 00:25:51.402 08:22:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:51.402 08:22:24 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:25:51.402 08:22:24 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:25:51.402 08:22:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:51.402 08:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:51.967 ************************************ 00:25:51.967 END TEST scheduler_create_thread 00:25:51.967 ************************************ 00:25:51.967 08:22:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:51.967 00:25:51.967 real 0m4.210s 00:25:51.967 user 0m0.036s 00:25:51.967 sys 0m0.004s 00:25:51.967 08:22:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:51.968 08:22:25 -- common/autotest_common.sh@10 -- # set +x 00:25:52.271 08:22:25 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:52.271 08:22:25 -- scheduler/scheduler.sh@46 -- # killprocess 56919 00:25:52.272 08:22:25 -- common/autotest_common.sh@926 -- # '[' -z 56919 ']' 00:25:52.272 08:22:25 -- common/autotest_common.sh@930 -- # kill -0 56919 00:25:52.272 08:22:25 -- common/autotest_common.sh@931 -- # uname 00:25:52.272 08:22:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:52.272 08:22:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56919 00:25:52.272 killing process with pid 56919 00:25:52.272 08:22:25 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:52.272 08:22:25 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:52.272 08:22:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56919' 00:25:52.272 08:22:25 -- common/autotest_common.sh@945 -- # kill 56919 00:25:52.272 08:22:25 -- common/autotest_common.sh@950 -- # wait 56919 00:25:52.272 [2024-04-17 08:22:25.557890] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
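scheduler_create_thread above exercised the thread-management RPCs added by scheduler_plugin: four busy (active 100) threads pinned one per core, four idle pinned threads, an unpinned one_third_active thread at 30, a half_active thread created idle and raised to 50 with thread_set_active, and a throwaway thread that was deleted again. The traced calls, using rpc.py directly (the plugin module must be importable, e.g. via PYTHONPATH):

# Pinned/unpinned thread creation with a given active load (-a percent).
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"

$rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # repeated for 0x2, 0x4, 0x8
$rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # repeated for 0x2, 0x4, 0x8
$rpc scheduler_thread_create -n one_third_active -a 30

thread_id=$($rpc scheduler_thread_create -n half_active -a 0)  # returned id; 11 in this log
$rpc scheduler_thread_set_active "$thread_id" 50

thread_id=$($rpc scheduler_thread_create -n deleted -a 100)    # id 12 in this log
$rpc scheduler_thread_delete "$thread_id"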
00:25:52.532 00:25:52.532 real 0m5.963s 00:25:52.532 user 0m12.777s 00:25:52.532 sys 0m0.431s 00:25:52.532 08:22:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:52.532 08:22:25 -- common/autotest_common.sh@10 -- # set +x 00:25:52.532 ************************************ 00:25:52.532 END TEST event_scheduler 00:25:52.532 ************************************ 00:25:52.793 08:22:25 -- event/event.sh@51 -- # modprobe -n nbd 00:25:52.793 08:22:25 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:25:52.793 08:22:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:52.793 08:22:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:52.793 08:22:25 -- common/autotest_common.sh@10 -- # set +x 00:25:52.793 ************************************ 00:25:52.793 START TEST app_repeat 00:25:52.793 ************************************ 00:25:52.793 08:22:25 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:25:52.793 08:22:25 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:52.793 08:22:25 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:52.793 08:22:25 -- event/event.sh@13 -- # local nbd_list 00:25:52.793 08:22:25 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:25:52.793 08:22:25 -- event/event.sh@14 -- # local bdev_list 00:25:52.793 08:22:25 -- event/event.sh@15 -- # local repeat_times=4 00:25:52.793 08:22:25 -- event/event.sh@17 -- # modprobe nbd 00:25:52.793 08:22:25 -- event/event.sh@19 -- # repeat_pid=57047 00:25:52.793 08:22:25 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:25:52.793 08:22:25 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:25:52.793 08:22:25 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57047' 00:25:52.793 Process app_repeat pid: 57047 00:25:52.793 spdk_app_start Round 0 00:25:52.793 08:22:25 -- event/event.sh@23 -- # for i in {0..2} 00:25:52.793 08:22:25 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:25:52.793 08:22:25 -- event/event.sh@25 -- # waitforlisten 57047 /var/tmp/spdk-nbd.sock 00:25:52.793 08:22:25 -- common/autotest_common.sh@819 -- # '[' -z 57047 ']' 00:25:52.793 08:22:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:25:52.793 08:22:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:52.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:25:52.793 08:22:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:25:52.793 08:22:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:52.793 08:22:25 -- common/autotest_common.sh@10 -- # set +x 00:25:52.793 [2024-04-17 08:22:25.942958] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
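app_repeat_test above is the start/stop soak test: with the nbd kernel module loaded it runs the app for repeat_times rounds, each round talking to a dedicated RPC socket so it cannot collide with a system spdk_tgt. Setup sketch with the flags exactly as traced:

# app_repeat setup as traced: nbd module, private RPC socket, -m/-t as logged.
modprobe nbd
app=/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat
$app -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
repeat_pid=$!
waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # helper from autotest_common.sh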
00:25:52.793 [2024-04-17 08:22:25.943042] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57047 ] 00:25:52.793 [2024-04-17 08:22:26.080595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:53.051 [2024-04-17 08:22:26.185205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.051 [2024-04-17 08:22:26.185206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.634 08:22:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:53.634 08:22:26 -- common/autotest_common.sh@852 -- # return 0 00:25:53.634 08:22:26 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:25:53.891 Malloc0 00:25:53.891 08:22:27 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:25:54.149 Malloc1 00:25:54.149 08:22:27 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:25:54.149 08:22:27 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:54.149 08:22:27 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:25:54.149 08:22:27 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:25:54.149 08:22:27 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:54.149 08:22:27 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:25:54.149 08:22:27 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:25:54.149 08:22:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:54.149 08:22:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:25:54.149 08:22:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:54.149 08:22:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:54.149 08:22:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:54.149 08:22:27 -- bdev/nbd_common.sh@12 -- # local i 00:25:54.149 08:22:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:54.149 08:22:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:54.149 08:22:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:25:54.406 /dev/nbd0 00:25:54.406 08:22:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:54.406 08:22:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:54.406 08:22:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:54.406 08:22:27 -- common/autotest_common.sh@857 -- # local i 00:25:54.406 08:22:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:54.406 08:22:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:54.406 08:22:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:54.406 08:22:27 -- common/autotest_common.sh@861 -- # break 00:25:54.406 08:22:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:54.406 08:22:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:54.406 08:22:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:25:54.406 1+0 records in 00:25:54.406 1+0 records out 00:25:54.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427081 s, 9.6 MB/s 00:25:54.406 08:22:27 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:25:54.406 08:22:27 -- common/autotest_common.sh@874 -- # size=4096 00:25:54.406 08:22:27 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:25:54.406 08:22:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:54.407 08:22:27 -- common/autotest_common.sh@877 -- # return 0 00:25:54.407 08:22:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:54.407 08:22:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:54.407 08:22:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:25:54.665 /dev/nbd1 00:25:54.665 08:22:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:54.665 08:22:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:54.665 08:22:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:25:54.665 08:22:27 -- common/autotest_common.sh@857 -- # local i 00:25:54.665 08:22:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:54.665 08:22:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:54.665 08:22:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:25:54.665 08:22:27 -- common/autotest_common.sh@861 -- # break 00:25:54.665 08:22:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:54.665 08:22:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:54.665 08:22:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:25:54.665 1+0 records in 00:25:54.665 1+0 records out 00:25:54.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516151 s, 7.9 MB/s 00:25:54.665 08:22:27 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:25:54.665 08:22:27 -- common/autotest_common.sh@874 -- # size=4096 00:25:54.665 08:22:27 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:25:54.665 08:22:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:54.665 08:22:27 -- common/autotest_common.sh@877 -- # return 0 00:25:54.665 08:22:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:54.665 08:22:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:54.665 08:22:27 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:54.665 08:22:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:54.665 08:22:27 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:54.923 08:22:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:25:54.923 { 00:25:54.923 "bdev_name": "Malloc0", 00:25:54.923 "nbd_device": "/dev/nbd0" 00:25:54.923 }, 00:25:54.923 { 00:25:54.923 "bdev_name": "Malloc1", 00:25:54.923 "nbd_device": "/dev/nbd1" 00:25:54.923 } 00:25:54.923 ]' 00:25:54.923 08:22:28 -- bdev/nbd_common.sh@64 -- # echo '[ 00:25:54.923 { 00:25:54.923 "bdev_name": "Malloc0", 00:25:54.923 "nbd_device": "/dev/nbd0" 00:25:54.923 }, 00:25:54.923 { 00:25:54.923 "bdev_name": "Malloc1", 00:25:54.923 "nbd_device": "/dev/nbd1" 00:25:54.923 } 00:25:54.923 ]' 00:25:54.923 08:22:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:54.923 08:22:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:25:54.923 /dev/nbd1' 00:25:54.923 08:22:28 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:25:54.923 /dev/nbd1' 00:25:54.923 08:22:28 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:54.923 08:22:28 -- bdev/nbd_common.sh@65 -- # count=2 00:25:54.923 08:22:28 -- bdev/nbd_common.sh@66 -- # echo 2 00:25:54.923 08:22:28 -- bdev/nbd_common.sh@95 -- # count=2 00:25:54.923 08:22:28 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:25:54.923 08:22:28 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:25:54.923 08:22:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:54.923 08:22:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:54.923 08:22:28 -- bdev/nbd_common.sh@71 -- # local operation=write 00:25:54.923 08:22:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:25:54.923 08:22:28 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:25:54.923 08:22:28 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:25:55.182 256+0 records in 00:25:55.182 256+0 records out 00:25:55.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126305 s, 83.0 MB/s 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:25:55.182 256+0 records in 00:25:55.182 256+0 records out 00:25:55.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214113 s, 49.0 MB/s 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:25:55.182 256+0 records in 00:25:55.182 256+0 records out 00:25:55.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247942 s, 42.3 MB/s 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@51 -- # local i 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:55.182 08:22:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:55.442 08:22:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:55.442 08:22:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:55.442 08:22:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:55.442 08:22:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:55.442 08:22:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:55.442 08:22:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:55.442 08:22:28 -- bdev/nbd_common.sh@41 -- # break 00:25:55.442 08:22:28 -- bdev/nbd_common.sh@45 -- # return 0 00:25:55.442 08:22:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:55.442 08:22:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:25:55.700 08:22:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:55.700 08:22:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:55.700 08:22:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:55.700 08:22:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:55.700 08:22:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:55.700 08:22:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:55.700 08:22:28 -- bdev/nbd_common.sh@41 -- # break 00:25:55.700 08:22:28 -- bdev/nbd_common.sh@45 -- # return 0 00:25:55.700 08:22:28 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:55.700 08:22:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:55.700 08:22:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:55.958 08:22:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:55.958 08:22:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:55.958 08:22:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:55.958 08:22:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:55.958 08:22:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:25:55.958 08:22:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:55.958 08:22:29 -- bdev/nbd_common.sh@65 -- # true 00:25:55.958 08:22:29 -- bdev/nbd_common.sh@65 -- # count=0 00:25:55.958 08:22:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:25:55.958 08:22:29 -- bdev/nbd_common.sh@104 -- # count=0 00:25:55.958 08:22:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:25:55.958 08:22:29 -- bdev/nbd_common.sh@109 -- # return 0 00:25:55.958 08:22:29 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:25:56.216 08:22:29 -- event/event.sh@35 -- # sleep 3 00:25:56.216 [2024-04-17 08:22:29.545656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:56.474 [2024-04-17 08:22:29.646690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.474 [2024-04-17 08:22:29.646691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.474 [2024-04-17 08:22:29.690216] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:25:56.474 [2024-04-17 08:22:29.690275] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
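[editor's note] The trace above is the heart of nbd_dd_data_verify: fill a scratch file from /dev/urandom, push it through each exported /dev/nbdX with O_DIRECT, read it back with cmp, then delete the scratch file. A minimal standalone sketch of that write-and-verify cycle, assuming the two devices are already exported by the nbd_start_disk RPCs traced above:

#!/usr/bin/env bash
# Sketch of the nbd write/verify pattern traced above; /dev/nbd0 and
# /dev/nbd1 are assumed to already be backed by Malloc0/Malloc1.
set -euo pipefail

nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=$(mktemp)

# 1 MiB of random data: 256 blocks of 4096 bytes, matching the trace.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

for dev in "${nbd_list[@]}"; do
    # Write through the block device, bypassing the page cache.
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

for dev in "${nbd_list[@]}"; do
    # Byte-compare the first 1 MiB of the device against the scratch file.
    cmp -b -n 1M "$tmp_file" "$dev"
done

rm -f "$tmp_file"
echo "data verified on: ${nbd_list[*]}"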
00:25:59.775 08:22:32 -- event/event.sh@23 -- # for i in {0..2} 00:25:59.775 spdk_app_start Round 1 00:25:59.775 08:22:32 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:25:59.775 08:22:32 -- event/event.sh@25 -- # waitforlisten 57047 /var/tmp/spdk-nbd.sock 00:25:59.775 08:22:32 -- common/autotest_common.sh@819 -- # '[' -z 57047 ']' 00:25:59.775 08:22:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:25:59.775 08:22:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:59.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:25:59.775 08:22:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:25:59.775 08:22:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:59.775 08:22:32 -- common/autotest_common.sh@10 -- # set +x 00:25:59.775 08:22:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:59.775 08:22:32 -- common/autotest_common.sh@852 -- # return 0 00:25:59.776 08:22:32 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:25:59.776 Malloc0 00:25:59.776 08:22:32 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:25:59.776 Malloc1 00:25:59.776 08:22:33 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:25:59.776 08:22:33 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:59.776 08:22:33 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:25:59.776 08:22:33 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:25:59.776 08:22:33 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:59.776 08:22:33 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:25:59.776 08:22:33 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:25:59.776 08:22:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:59.776 08:22:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:25:59.776 08:22:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:59.776 08:22:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:59.776 08:22:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:59.776 08:22:33 -- bdev/nbd_common.sh@12 -- # local i 00:25:59.776 08:22:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:59.776 08:22:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:59.776 08:22:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:26:00.034 /dev/nbd0 00:26:00.034 08:22:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:00.034 08:22:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:00.034 08:22:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:00.034 08:22:33 -- common/autotest_common.sh@857 -- # local i 00:26:00.034 08:22:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:00.034 08:22:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:00.034 08:22:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:00.034 08:22:33 -- common/autotest_common.sh@861 -- # break 00:26:00.034 08:22:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:00.034 08:22:33 -- common/autotest_common.sh@872 -- # (( i 
<= 20 )) 00:26:00.034 08:22:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:26:00.034 1+0 records in 00:26:00.034 1+0 records out 00:26:00.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167693 s, 24.4 MB/s 00:26:00.034 08:22:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:00.034 08:22:33 -- common/autotest_common.sh@874 -- # size=4096 00:26:00.034 08:22:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:00.034 08:22:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:00.034 08:22:33 -- common/autotest_common.sh@877 -- # return 0 00:26:00.034 08:22:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:00.034 08:22:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:00.034 08:22:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:26:00.294 /dev/nbd1 00:26:00.294 08:22:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:00.294 08:22:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:00.294 08:22:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:26:00.294 08:22:33 -- common/autotest_common.sh@857 -- # local i 00:26:00.294 08:22:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:00.294 08:22:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:00.294 08:22:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:26:00.294 08:22:33 -- common/autotest_common.sh@861 -- # break 00:26:00.294 08:22:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:00.294 08:22:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:00.294 08:22:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:26:00.294 1+0 records in 00:26:00.294 1+0 records out 00:26:00.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384661 s, 10.6 MB/s 00:26:00.294 08:22:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:00.294 08:22:33 -- common/autotest_common.sh@874 -- # size=4096 00:26:00.294 08:22:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:00.294 08:22:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:00.294 08:22:33 -- common/autotest_common.sh@877 -- # return 0 00:26:00.294 08:22:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:00.294 08:22:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:00.294 08:22:33 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:00.294 08:22:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:00.294 08:22:33 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:00.553 08:22:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:00.553 { 00:26:00.553 "bdev_name": "Malloc0", 00:26:00.553 "nbd_device": "/dev/nbd0" 00:26:00.553 }, 00:26:00.553 { 00:26:00.553 "bdev_name": "Malloc1", 00:26:00.553 "nbd_device": "/dev/nbd1" 00:26:00.553 } 00:26:00.553 ]' 00:26:00.553 08:22:33 -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:00.553 { 00:26:00.553 "bdev_name": "Malloc0", 00:26:00.553 "nbd_device": "/dev/nbd0" 00:26:00.553 }, 00:26:00.553 { 00:26:00.553 "bdev_name": "Malloc1", 00:26:00.553 "nbd_device": "/dev/nbd1" 00:26:00.553 } 
00:26:00.553 ]' 00:26:00.553 08:22:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:00.811 08:22:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:26:00.811 /dev/nbd1' 00:26:00.811 08:22:33 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:26:00.811 /dev/nbd1' 00:26:00.811 08:22:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:00.811 08:22:33 -- bdev/nbd_common.sh@65 -- # count=2 00:26:00.811 08:22:33 -- bdev/nbd_common.sh@66 -- # echo 2 00:26:00.811 08:22:33 -- bdev/nbd_common.sh@95 -- # count=2 00:26:00.811 08:22:33 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:26:00.811 08:22:33 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:26:00.811 08:22:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:00.811 08:22:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:00.811 08:22:33 -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:00.811 08:22:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:00.811 08:22:33 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:00.811 08:22:33 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:26:00.811 256+0 records in 00:26:00.811 256+0 records out 00:26:00.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120731 s, 86.9 MB/s 00:26:00.811 08:22:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:00.811 08:22:33 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:00.812 256+0 records in 00:26:00.812 256+0 records out 00:26:00.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240309 s, 43.6 MB/s 00:26:00.812 08:22:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:00.812 08:22:33 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:26:00.812 256+0 records in 00:26:00.812 256+0 records out 00:26:00.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020551 s, 51.0 MB/s 00:26:00.812 08:22:33 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:26:00.812 08:22:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:00.812 08:22:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:00.812 08:22:33 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:00.812 08:22:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:00.812 08:22:33 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:00.812 08:22:33 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:00.812 08:22:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:00.812 08:22:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:26:00.812 08:22:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:00.812 08:22:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:26:00.812 08:22:33 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:00.812 08:22:33 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:00.812 08:22:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:00.812 08:22:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:26:00.812 08:22:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:00.812 08:22:33 -- bdev/nbd_common.sh@51 -- # local i 00:26:00.812 08:22:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:00.812 08:22:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:01.070 08:22:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:01.070 08:22:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:01.070 08:22:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:01.070 08:22:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:01.070 08:22:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:01.070 08:22:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:01.070 08:22:34 -- bdev/nbd_common.sh@41 -- # break 00:26:01.070 08:22:34 -- bdev/nbd_common.sh@45 -- # return 0 00:26:01.070 08:22:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:01.070 08:22:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:01.329 08:22:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:01.329 08:22:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:01.329 08:22:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:01.329 08:22:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:01.329 08:22:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:01.329 08:22:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:01.329 08:22:34 -- bdev/nbd_common.sh@41 -- # break 00:26:01.329 08:22:34 -- bdev/nbd_common.sh@45 -- # return 0 00:26:01.329 08:22:34 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:01.329 08:22:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:01.329 08:22:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:01.329 08:22:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:01.588 08:22:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:01.588 08:22:34 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:01.588 08:22:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:01.588 08:22:34 -- bdev/nbd_common.sh@65 -- # echo '' 00:26:01.588 08:22:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:01.588 08:22:34 -- bdev/nbd_common.sh@65 -- # true 00:26:01.588 08:22:34 -- bdev/nbd_common.sh@65 -- # count=0 00:26:01.588 08:22:34 -- bdev/nbd_common.sh@66 -- # echo 0 00:26:01.588 08:22:34 -- bdev/nbd_common.sh@104 -- # count=0 00:26:01.588 08:22:34 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:01.588 08:22:34 -- bdev/nbd_common.sh@109 -- # return 0 00:26:01.588 08:22:34 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:26:01.846 08:22:34 -- event/event.sh@35 -- # sleep 3 00:26:01.846 [2024-04-17 08:22:35.141848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:02.106 [2024-04-17 08:22:35.241212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.106 [2024-04-17 08:22:35.241213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.106 [2024-04-17 08:22:35.283501] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
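[editor's note] waitfornbd and waitfornbd_exit, both traced repeatedly above, are the same bounded polling loop over /proc/partitions run in opposite directions: one waits for the nbd name to appear, the other for it to disappear, giving up after 20 attempts. A sketch of that idiom (the real waitfornbd also does a 4 KiB direct read as a liveness probe, visible in the dd lines above; the sleep interval here is an assumption, not shown in the trace):

# Poll /proc/partitions until an nbd device appears or disappears,
# with the same 20-try budget as the loops traced above.
wait_for_nbd_state() {
    local nbd_name=$1 state=$2 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            [[ $state == present ]] && return 0
        else
            [[ $state == absent ]] && return 0
        fi
        sleep 0.1   # pacing is an assumption; the trace does not show one
    done
    return 1
}

wait_for_nbd_state nbd0 present   # after nbd_start_disk
wait_for_nbd_state nbd0 absent    # after nbd_stop_disk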
00:26:02.106 [2024-04-17 08:22:35.283560] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:26:04.641 spdk_app_start Round 2 00:26:04.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:04.641 08:22:37 -- event/event.sh@23 -- # for i in {0..2} 00:26:04.641 08:22:37 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:26:04.641 08:22:37 -- event/event.sh@25 -- # waitforlisten 57047 /var/tmp/spdk-nbd.sock 00:26:04.641 08:22:37 -- common/autotest_common.sh@819 -- # '[' -z 57047 ']' 00:26:04.641 08:22:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:04.641 08:22:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:04.641 08:22:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:26:04.641 08:22:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:04.641 08:22:37 -- common/autotest_common.sh@10 -- # set +x 00:26:04.900 08:22:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:04.900 08:22:38 -- common/autotest_common.sh@852 -- # return 0 00:26:04.900 08:22:38 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:26:05.160 Malloc0 00:26:05.160 08:22:38 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:26:05.419 Malloc1 00:26:05.419 08:22:38 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:26:05.419 08:22:38 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:05.419 08:22:38 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:05.419 08:22:38 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:05.419 08:22:38 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:05.419 08:22:38 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:05.419 08:22:38 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:26:05.419 08:22:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:05.419 08:22:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:05.419 08:22:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:05.419 08:22:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:05.419 08:22:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:05.419 08:22:38 -- bdev/nbd_common.sh@12 -- # local i 00:26:05.419 08:22:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:05.419 08:22:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:05.419 08:22:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:26:05.678 /dev/nbd0 00:26:05.678 08:22:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:05.678 08:22:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:05.678 08:22:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:05.678 08:22:38 -- common/autotest_common.sh@857 -- # local i 00:26:05.678 08:22:38 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:05.678 08:22:38 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:05.678 08:22:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:05.678 08:22:38 -- common/autotest_common.sh@861 
-- # break 00:26:05.678 08:22:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:05.678 08:22:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:05.678 08:22:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:26:05.678 1+0 records in 00:26:05.678 1+0 records out 00:26:05.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487711 s, 8.4 MB/s 00:26:05.678 08:22:38 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:05.678 08:22:38 -- common/autotest_common.sh@874 -- # size=4096 00:26:05.678 08:22:38 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:05.678 08:22:38 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:05.678 08:22:38 -- common/autotest_common.sh@877 -- # return 0 00:26:05.678 08:22:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:05.678 08:22:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:05.678 08:22:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:26:05.937 /dev/nbd1 00:26:05.937 08:22:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:05.937 08:22:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:05.937 08:22:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:26:05.937 08:22:39 -- common/autotest_common.sh@857 -- # local i 00:26:05.937 08:22:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:05.937 08:22:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:05.937 08:22:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:26:05.937 08:22:39 -- common/autotest_common.sh@861 -- # break 00:26:05.937 08:22:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:05.937 08:22:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:05.937 08:22:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:26:05.937 1+0 records in 00:26:05.937 1+0 records out 00:26:05.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328989 s, 12.5 MB/s 00:26:05.937 08:22:39 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:05.937 08:22:39 -- common/autotest_common.sh@874 -- # size=4096 00:26:05.937 08:22:39 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:05.937 08:22:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:05.937 08:22:39 -- common/autotest_common.sh@877 -- # return 0 00:26:05.937 08:22:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:05.937 08:22:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:05.937 08:22:39 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:05.937 08:22:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:05.938 08:22:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:06.195 08:22:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:06.195 { 00:26:06.195 "bdev_name": "Malloc0", 00:26:06.195 "nbd_device": "/dev/nbd0" 00:26:06.195 }, 00:26:06.195 { 00:26:06.195 "bdev_name": "Malloc1", 00:26:06.195 "nbd_device": "/dev/nbd1" 00:26:06.195 } 00:26:06.195 ]' 00:26:06.195 08:22:39 -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:06.195 { 00:26:06.195 "bdev_name": "Malloc0", 00:26:06.195 
"nbd_device": "/dev/nbd0" 00:26:06.195 }, 00:26:06.195 { 00:26:06.195 "bdev_name": "Malloc1", 00:26:06.195 "nbd_device": "/dev/nbd1" 00:26:06.195 } 00:26:06.195 ]' 00:26:06.195 08:22:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:06.195 08:22:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:26:06.195 /dev/nbd1' 00:26:06.195 08:22:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:06.195 08:22:39 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:26:06.195 /dev/nbd1' 00:26:06.195 08:22:39 -- bdev/nbd_common.sh@65 -- # count=2 00:26:06.195 08:22:39 -- bdev/nbd_common.sh@66 -- # echo 2 00:26:06.195 08:22:39 -- bdev/nbd_common.sh@95 -- # count=2 00:26:06.195 08:22:39 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:26:06.195 08:22:39 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:26:06.195 08:22:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:06.195 08:22:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:06.195 08:22:39 -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:06.195 08:22:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:06.195 08:22:39 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:06.195 08:22:39 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:26:06.195 256+0 records in 00:26:06.195 256+0 records out 00:26:06.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115337 s, 90.9 MB/s 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:06.196 256+0 records in 00:26:06.196 256+0 records out 00:26:06.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237049 s, 44.2 MB/s 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:26:06.196 256+0 records in 00:26:06.196 256+0 records out 00:26:06.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214002 s, 49.0 MB/s 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:06.196 08:22:39 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@51 -- # local i 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:06.196 08:22:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:06.454 08:22:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:06.454 08:22:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:06.454 08:22:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:06.454 08:22:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:06.454 08:22:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:06.454 08:22:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:06.454 08:22:39 -- bdev/nbd_common.sh@41 -- # break 00:26:06.454 08:22:39 -- bdev/nbd_common.sh@45 -- # return 0 00:26:06.454 08:22:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:06.454 08:22:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:06.713 08:22:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:06.713 08:22:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:06.713 08:22:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:06.713 08:22:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:06.713 08:22:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:06.713 08:22:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:06.713 08:22:39 -- bdev/nbd_common.sh@41 -- # break 00:26:06.713 08:22:39 -- bdev/nbd_common.sh@45 -- # return 0 00:26:06.713 08:22:39 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:06.713 08:22:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:06.713 08:22:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:06.972 08:22:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:06.972 08:22:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:06.972 08:22:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:06.972 08:22:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:06.972 08:22:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:06.972 08:22:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:26:06.972 08:22:40 -- bdev/nbd_common.sh@65 -- # true 00:26:06.972 08:22:40 -- bdev/nbd_common.sh@65 -- # count=0 00:26:06.972 08:22:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:26:06.972 08:22:40 -- bdev/nbd_common.sh@104 -- # count=0 00:26:06.972 08:22:40 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:06.972 08:22:40 -- bdev/nbd_common.sh@109 -- # return 0 00:26:06.972 08:22:40 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:26:07.230 08:22:40 -- event/event.sh@35 -- # sleep 3 00:26:07.488 [2024-04-17 08:22:40.714041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:07.488 [2024-04-17 08:22:40.814916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.488 [2024-04-17 08:22:40.814919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.746 [2024-04-17 08:22:40.858262] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:26:07.747 [2024-04-17 08:22:40.858319] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:26:10.280 08:22:43 -- event/event.sh@38 -- # waitforlisten 57047 /var/tmp/spdk-nbd.sock 00:26:10.280 08:22:43 -- common/autotest_common.sh@819 -- # '[' -z 57047 ']' 00:26:10.280 08:22:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:10.280 08:22:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:10.280 08:22:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:26:10.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:10.280 08:22:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:10.280 08:22:43 -- common/autotest_common.sh@10 -- # set +x 00:26:10.540 08:22:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:10.540 08:22:43 -- common/autotest_common.sh@852 -- # return 0 00:26:10.540 08:22:43 -- event/event.sh@39 -- # killprocess 57047 00:26:10.540 08:22:43 -- common/autotest_common.sh@926 -- # '[' -z 57047 ']' 00:26:10.540 08:22:43 -- common/autotest_common.sh@930 -- # kill -0 57047 00:26:10.540 08:22:43 -- common/autotest_common.sh@931 -- # uname 00:26:10.540 08:22:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:10.540 08:22:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57047 00:26:10.540 killing process with pid 57047 00:26:10.540 08:22:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:10.540 08:22:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:10.540 08:22:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57047' 00:26:10.540 08:22:43 -- common/autotest_common.sh@945 -- # kill 57047 00:26:10.540 08:22:43 -- common/autotest_common.sh@950 -- # wait 57047 00:26:10.799 spdk_app_start is called in Round 0. 00:26:10.799 Shutdown signal received, stop current app iteration 00:26:10.799 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:26:10.799 spdk_app_start is called in Round 1. 00:26:10.799 Shutdown signal received, stop current app iteration 00:26:10.799 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:26:10.799 spdk_app_start is called in Round 2. 00:26:10.799 Shutdown signal received, stop current app iteration 00:26:10.799 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:26:10.799 spdk_app_start is called in Round 3. 
00:26:10.799 Shutdown signal received, stop current app iteration 00:26:10.799 08:22:44 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:26:10.799 08:22:44 -- event/event.sh@42 -- # return 0 00:26:10.799 00:26:10.799 real 0m18.124s 00:26:10.799 user 0m39.982s 00:26:10.799 sys 0m2.906s 00:26:10.799 08:22:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:10.799 08:22:44 -- common/autotest_common.sh@10 -- # set +x 00:26:10.799 ************************************ 00:26:10.799 END TEST app_repeat 00:26:10.799 ************************************ 00:26:10.799 08:22:44 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:26:10.799 08:22:44 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:26:10.799 08:22:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:10.799 08:22:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:10.799 08:22:44 -- common/autotest_common.sh@10 -- # set +x 00:26:10.799 ************************************ 00:26:10.799 START TEST cpu_locks 00:26:10.799 ************************************ 00:26:10.799 08:22:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:26:11.070 * Looking for test storage... 00:26:11.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:26:11.070 08:22:44 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:26:11.070 08:22:44 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:26:11.070 08:22:44 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:26:11.070 08:22:44 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:26:11.070 08:22:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:11.070 08:22:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:11.070 08:22:44 -- common/autotest_common.sh@10 -- # set +x 00:26:11.070 ************************************ 00:26:11.070 START TEST default_locks 00:26:11.070 ************************************ 00:26:11.070 08:22:44 -- common/autotest_common.sh@1104 -- # default_locks 00:26:11.070 08:22:44 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:11.070 08:22:44 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57660 00:26:11.070 08:22:44 -- event/cpu_locks.sh@47 -- # waitforlisten 57660 00:26:11.070 08:22:44 -- common/autotest_common.sh@819 -- # '[' -z 57660 ']' 00:26:11.070 08:22:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.070 08:22:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:11.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.070 08:22:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.070 08:22:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:11.070 08:22:44 -- common/autotest_common.sh@10 -- # set +x 00:26:11.070 [2024-04-17 08:22:44.251306] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
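[editor's note] The killprocess call that tore down the app_repeat target (pid 57047) above follows a fixed shape: verify the pid still maps to a process name via ps, announce it, kill, then wait so the exit status is reaped. A reduced sketch of that flow (the traced helper also special-cases targets launched under sudo, omitted here):

# Guard-then-kill teardown, as used for pid 57047 above. wait(1) only
# succeeds when the pid is a child of this shell, which holds for targets
# the test script itself launched.
killprocess_sketch() {
    local pid=$1
    [[ -n $pid ]] || return 1
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid") || return 1
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}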
00:26:11.070 [2024-04-17 08:22:44.251382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57660 ] 00:26:11.070 [2024-04-17 08:22:44.374677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.332 [2024-04-17 08:22:44.469014] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:11.332 [2024-04-17 08:22:44.469166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.898 08:22:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:11.898 08:22:45 -- common/autotest_common.sh@852 -- # return 0 00:26:11.898 08:22:45 -- event/cpu_locks.sh@49 -- # locks_exist 57660 00:26:11.898 08:22:45 -- event/cpu_locks.sh@22 -- # lslocks -p 57660 00:26:11.898 08:22:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:26:12.539 08:22:45 -- event/cpu_locks.sh@50 -- # killprocess 57660 00:26:12.539 08:22:45 -- common/autotest_common.sh@926 -- # '[' -z 57660 ']' 00:26:12.539 08:22:45 -- common/autotest_common.sh@930 -- # kill -0 57660 00:26:12.539 08:22:45 -- common/autotest_common.sh@931 -- # uname 00:26:12.539 08:22:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:12.539 08:22:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57660 00:26:12.540 08:22:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:12.540 08:22:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:12.540 08:22:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57660' 00:26:12.540 killing process with pid 57660 00:26:12.540 08:22:45 -- common/autotest_common.sh@945 -- # kill 57660 00:26:12.540 08:22:45 -- common/autotest_common.sh@950 -- # wait 57660 00:26:12.798 08:22:45 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57660 00:26:12.798 08:22:45 -- common/autotest_common.sh@640 -- # local es=0 00:26:12.798 08:22:45 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 57660 00:26:12.798 08:22:45 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:26:12.798 08:22:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:12.798 08:22:45 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:26:12.798 08:22:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:12.798 08:22:45 -- common/autotest_common.sh@643 -- # waitforlisten 57660 00:26:12.798 08:22:45 -- common/autotest_common.sh@819 -- # '[' -z 57660 ']' 00:26:12.798 08:22:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.798 08:22:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:12.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.798 08:22:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
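[editor's note] The default_locks pass/fail above comes down to locks_exist: lslocks lists the file locks held by the target pid, and grep checks for the spdk_cpu_lock name carried by the core-lock file. A sketch of that check, assuming lslocks from util-linux is installed:

# True when the given pid holds an SPDK CPU-core file lock, mirroring
# the 'lslocks -p 57660 | grep -q spdk_cpu_lock' pair traced above.
locks_exist_sketch() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

# Example: pid taken from the command line.
if locks_exist_sketch "$1"; then
    echo "pid $1 holds its core lock"
fi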
00:26:12.798 08:22:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:12.798 08:22:45 -- common/autotest_common.sh@10 -- # set +x 00:26:12.798 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (57660) - No such process 00:26:12.798 ERROR: process (pid: 57660) is no longer running 00:26:12.798 08:22:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:12.798 08:22:45 -- common/autotest_common.sh@852 -- # return 1 00:26:12.798 08:22:45 -- common/autotest_common.sh@643 -- # es=1 00:26:12.798 08:22:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:12.798 08:22:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:12.798 08:22:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:12.798 08:22:45 -- event/cpu_locks.sh@54 -- # no_locks 00:26:12.798 08:22:45 -- event/cpu_locks.sh@26 -- # lock_files=() 00:26:12.798 08:22:45 -- event/cpu_locks.sh@26 -- # local lock_files 00:26:12.798 08:22:45 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:26:12.798 00:26:12.798 real 0m1.706s 00:26:12.798 user 0m1.769s 00:26:12.798 sys 0m0.487s 00:26:12.798 08:22:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:12.798 08:22:45 -- common/autotest_common.sh@10 -- # set +x 00:26:12.798 ************************************ 00:26:12.798 END TEST default_locks 00:26:12.798 ************************************ 00:26:12.798 08:22:45 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:26:12.798 08:22:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:12.798 08:22:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:12.798 08:22:45 -- common/autotest_common.sh@10 -- # set +x 00:26:12.798 ************************************ 00:26:12.798 START TEST default_locks_via_rpc 00:26:12.798 ************************************ 00:26:12.798 08:22:45 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:26:12.798 08:22:45 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57724 00:26:12.798 08:22:45 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:12.798 08:22:45 -- event/cpu_locks.sh@63 -- # waitforlisten 57724 00:26:12.798 08:22:45 -- common/autotest_common.sh@819 -- # '[' -z 57724 ']' 00:26:12.798 08:22:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.798 08:22:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:12.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.798 08:22:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.798 08:22:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:12.798 08:22:45 -- common/autotest_common.sh@10 -- # set +x 00:26:12.798 [2024-04-17 08:22:46.031445] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
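[editor's note] The negative check closing default_locks above ('NOT waitforlisten 57660') relies on a small inversion wrapper: run the command, capture its status, and succeed only if it failed, treating statuses above 128 (signal deaths) as plain failure. A simplified sketch of that wrapper (the traced helper also validates its argument with type -t first):

# Succeed only when the wrapped command fails, as in 'NOT waitforlisten'.
NOT_sketch() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=1   # normalize signal-death statuses
    (( es != 0 ))
}

# The pid is long dead at this point, so the probe must fail for NOT to pass.
NOT_sketch kill -0 57660 2>/dev/null && echo "process is gone, as expected"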
00:26:12.798 [2024-04-17 08:22:46.031564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57724 ] 00:26:13.058 [2024-04-17 08:22:46.159501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.058 [2024-04-17 08:22:46.263144] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:13.058 [2024-04-17 08:22:46.263295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.624 08:22:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:13.624 08:22:46 -- common/autotest_common.sh@852 -- # return 0 00:26:13.624 08:22:46 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:26:13.624 08:22:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:13.624 08:22:46 -- common/autotest_common.sh@10 -- # set +x 00:26:13.624 08:22:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:13.624 08:22:46 -- event/cpu_locks.sh@67 -- # no_locks 00:26:13.624 08:22:46 -- event/cpu_locks.sh@26 -- # lock_files=() 00:26:13.624 08:22:46 -- event/cpu_locks.sh@26 -- # local lock_files 00:26:13.624 08:22:46 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:26:13.624 08:22:46 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:26:13.624 08:22:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:13.624 08:22:46 -- common/autotest_common.sh@10 -- # set +x 00:26:13.624 08:22:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:13.624 08:22:46 -- event/cpu_locks.sh@71 -- # locks_exist 57724 00:26:13.624 08:22:46 -- event/cpu_locks.sh@22 -- # lslocks -p 57724 00:26:13.624 08:22:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:26:14.191 08:22:47 -- event/cpu_locks.sh@73 -- # killprocess 57724 00:26:14.191 08:22:47 -- common/autotest_common.sh@926 -- # '[' -z 57724 ']' 00:26:14.191 08:22:47 -- common/autotest_common.sh@930 -- # kill -0 57724 00:26:14.191 08:22:47 -- common/autotest_common.sh@931 -- # uname 00:26:14.191 08:22:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:14.191 08:22:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57724 00:26:14.191 08:22:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:14.191 08:22:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:14.191 killing process with pid 57724 00:26:14.191 08:22:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57724' 00:26:14.191 08:22:47 -- common/autotest_common.sh@945 -- # kill 57724 00:26:14.191 08:22:47 -- common/autotest_common.sh@950 -- # wait 57724 00:26:14.449 00:26:14.449 real 0m1.743s 00:26:14.449 user 0m1.809s 00:26:14.449 sys 0m0.507s 00:26:14.449 08:22:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:14.449 08:22:47 -- common/autotest_common.sh@10 -- # set +x 00:26:14.449 ************************************ 00:26:14.449 END TEST default_locks_via_rpc 00:26:14.449 ************************************ 00:26:14.449 08:22:47 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:26:14.449 08:22:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:14.449 08:22:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:14.449 08:22:47 -- common/autotest_common.sh@10 -- # set +x 00:26:14.449 
************************************ 00:26:14.449 START TEST non_locking_app_on_locked_coremask 00:26:14.450 ************************************ 00:26:14.450 08:22:47 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:26:14.708 08:22:47 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57792 00:26:14.708 08:22:47 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:14.708 08:22:47 -- event/cpu_locks.sh@81 -- # waitforlisten 57792 /var/tmp/spdk.sock 00:26:14.708 08:22:47 -- common/autotest_common.sh@819 -- # '[' -z 57792 ']' 00:26:14.708 08:22:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:14.708 08:22:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:14.708 08:22:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:14.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:14.708 08:22:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:14.708 08:22:47 -- common/autotest_common.sh@10 -- # set +x 00:26:14.708 [2024-04-17 08:22:47.838428] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:14.708 [2024-04-17 08:22:47.838510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57792 ] 00:26:14.708 [2024-04-17 08:22:47.977825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.966 [2024-04-17 08:22:48.072925] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:14.966 [2024-04-17 08:22:48.073071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.534 08:22:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:15.534 08:22:48 -- common/autotest_common.sh@852 -- # return 0 00:26:15.534 08:22:48 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57816 00:26:15.534 08:22:48 -- event/cpu_locks.sh@85 -- # waitforlisten 57816 /var/tmp/spdk2.sock 00:26:15.534 08:22:48 -- common/autotest_common.sh@819 -- # '[' -z 57816 ']' 00:26:15.534 08:22:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:26:15.534 08:22:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:15.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:26:15.534 08:22:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:26:15.534 08:22:48 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:26:15.534 08:22:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:15.534 08:22:48 -- common/autotest_common.sh@10 -- # set +x 00:26:15.534 [2024-04-17 08:22:48.758286] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:15.534 [2024-04-17 08:22:48.758373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57816 ] 00:26:15.797 [2024-04-17 08:22:48.888322] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:26:15.797 [2024-04-17 08:22:48.888359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.797 [2024-04-17 08:22:49.095935] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:15.797 [2024-04-17 08:22:49.096103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.364 08:22:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:16.364 08:22:49 -- common/autotest_common.sh@852 -- # return 0 00:26:16.364 08:22:49 -- event/cpu_locks.sh@87 -- # locks_exist 57792 00:26:16.364 08:22:49 -- event/cpu_locks.sh@22 -- # lslocks -p 57792 00:26:16.364 08:22:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:26:16.930 08:22:50 -- event/cpu_locks.sh@89 -- # killprocess 57792 00:26:16.930 08:22:50 -- common/autotest_common.sh@926 -- # '[' -z 57792 ']' 00:26:16.930 08:22:50 -- common/autotest_common.sh@930 -- # kill -0 57792 00:26:16.930 08:22:50 -- common/autotest_common.sh@931 -- # uname 00:26:16.930 08:22:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:16.930 08:22:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57792 00:26:16.930 08:22:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:16.930 08:22:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:16.930 killing process with pid 57792 00:26:16.930 08:22:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57792' 00:26:16.930 08:22:50 -- common/autotest_common.sh@945 -- # kill 57792 00:26:16.930 08:22:50 -- common/autotest_common.sh@950 -- # wait 57792 00:26:17.496 08:22:50 -- event/cpu_locks.sh@90 -- # killprocess 57816 00:26:17.496 08:22:50 -- common/autotest_common.sh@926 -- # '[' -z 57816 ']' 00:26:17.496 08:22:50 -- common/autotest_common.sh@930 -- # kill -0 57816 00:26:17.496 08:22:50 -- common/autotest_common.sh@931 -- # uname 00:26:17.496 08:22:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:17.496 08:22:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57816 00:26:17.496 08:22:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:17.496 08:22:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:17.496 killing process with pid 57816 00:26:17.496 08:22:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57816' 00:26:17.496 08:22:50 -- common/autotest_common.sh@945 -- # kill 57816 00:26:17.496 08:22:50 -- common/autotest_common.sh@950 -- # wait 57816 00:26:18.063 00:26:18.063 real 0m3.388s 00:26:18.063 user 0m3.695s 00:26:18.063 sys 0m0.874s 00:26:18.063 08:22:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:18.063 08:22:51 -- common/autotest_common.sh@10 -- # set +x 00:26:18.063 ************************************ 00:26:18.063 END TEST non_locking_app_on_locked_coremask 00:26:18.063 ************************************ 00:26:18.063 08:22:51 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:26:18.063 08:22:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:18.063 08:22:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:18.063 08:22:51 -- common/autotest_common.sh@10 -- # set +x 00:26:18.063 ************************************ 00:26:18.063 START TEST locking_app_on_unlocked_coremask 00:26:18.063 ************************************ 00:26:18.063 08:22:51 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:26:18.063 08:22:51 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=57890 00:26:18.063 08:22:51 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:26:18.063 08:22:51 -- event/cpu_locks.sh@99 -- # waitforlisten 57890 /var/tmp/spdk.sock 00:26:18.063 08:22:51 -- common/autotest_common.sh@819 -- # '[' -z 57890 ']' 00:26:18.063 08:22:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.063 08:22:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:18.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.063 08:22:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.063 08:22:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:18.063 08:22:51 -- common/autotest_common.sh@10 -- # set +x 00:26:18.063 [2024-04-17 08:22:51.288596] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:18.063 [2024-04-17 08:22:51.288672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57890 ] 00:26:18.321 [2024-04-17 08:22:51.427275] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:26:18.321 [2024-04-17 08:22:51.427335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.321 [2024-04-17 08:22:51.531930] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:18.321 [2024-04-17 08:22:51.532074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.255 08:22:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:19.255 08:22:52 -- common/autotest_common.sh@852 -- # return 0 00:26:19.255 08:22:52 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57917 00:26:19.255 08:22:52 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:26:19.255 08:22:52 -- event/cpu_locks.sh@103 -- # waitforlisten 57917 /var/tmp/spdk2.sock 00:26:19.255 08:22:52 -- common/autotest_common.sh@819 -- # '[' -z 57917 ']' 00:26:19.255 08:22:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:26:19.255 08:22:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:19.255 08:22:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:26:19.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:26:19.255 08:22:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:19.255 08:22:52 -- common/autotest_common.sh@10 -- # set +x 00:26:19.255 [2024-04-17 08:22:52.309532] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
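[editor's note] locking_app_on_unlocked_coremask drives two targets at the same core mask: the first opts out with --disable-cpumask-locks (hence the 'CPU core locks deactivated' notice above), which leaves the core lock free for the second, default-behaviour instance started on /var/tmp/spdk2.sock. The launch pattern, sketched with the readiness waits reduced to comments:

# Two spdk_tgt instances on core 0; only the second takes the core lock.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$SPDK_BIN" -m 0x1 --disable-cpumask-locks &   # holds no spdk_cpu_lock
pid1=$!
# ... wait for /var/tmp/spdk.sock to accept RPCs ...

"$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock &    # free to lock core 0
pid2=$!
# ... wait for /var/tmp/spdk2.sock, then assert locks_exist "$pid2" ...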
00:26:19.255 [2024-04-17 08:22:52.309957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57917 ] 00:26:19.255 [2024-04-17 08:22:52.443866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.513 [2024-04-17 08:22:52.648301] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:19.513 [2024-04-17 08:22:52.648468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.080 08:22:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:20.080 08:22:53 -- common/autotest_common.sh@852 -- # return 0 00:26:20.080 08:22:53 -- event/cpu_locks.sh@105 -- # locks_exist 57917 00:26:20.080 08:22:53 -- event/cpu_locks.sh@22 -- # lslocks -p 57917 00:26:20.080 08:22:53 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:26:20.337 08:22:53 -- event/cpu_locks.sh@107 -- # killprocess 57890 00:26:20.337 08:22:53 -- common/autotest_common.sh@926 -- # '[' -z 57890 ']' 00:26:20.338 08:22:53 -- common/autotest_common.sh@930 -- # kill -0 57890 00:26:20.338 08:22:53 -- common/autotest_common.sh@931 -- # uname 00:26:20.338 08:22:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:20.338 08:22:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57890 00:26:20.338 08:22:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:20.338 08:22:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:20.338 killing process with pid 57890 00:26:20.338 08:22:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57890' 00:26:20.338 08:22:53 -- common/autotest_common.sh@945 -- # kill 57890 00:26:20.338 08:22:53 -- common/autotest_common.sh@950 -- # wait 57890 00:26:21.271 08:22:54 -- event/cpu_locks.sh@108 -- # killprocess 57917 00:26:21.271 08:22:54 -- common/autotest_common.sh@926 -- # '[' -z 57917 ']' 00:26:21.271 08:22:54 -- common/autotest_common.sh@930 -- # kill -0 57917 00:26:21.271 08:22:54 -- common/autotest_common.sh@931 -- # uname 00:26:21.271 08:22:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:21.271 08:22:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57917 00:26:21.271 08:22:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:21.271 08:22:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:21.271 killing process with pid 57917 00:26:21.271 08:22:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57917' 00:26:21.271 08:22:54 -- common/autotest_common.sh@945 -- # kill 57917 00:26:21.271 08:22:54 -- common/autotest_common.sh@950 -- # wait 57917 00:26:21.529 00:26:21.529 real 0m3.493s 00:26:21.529 user 0m3.863s 00:26:21.529 sys 0m0.878s 00:26:21.529 08:22:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.529 08:22:54 -- common/autotest_common.sh@10 -- # set +x 00:26:21.529 ************************************ 00:26:21.529 END TEST locking_app_on_unlocked_coremask 00:26:21.529 ************************************ 00:26:21.529 08:22:54 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:26:21.529 08:22:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:21.529 08:22:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:21.529 08:22:54 -- common/autotest_common.sh@10 -- # set +x 
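Note on the locks_exist / lslocks check above: with CPU-core locks active, the target holds an flock() on one /var/tmp/spdk_cpu_lock_<core> file per claimed core, and lslocks reports held locks per PID. A minimal sketch of the check (assumed shape; the real helper lives in test/event/cpu_locks.sh, and locks_exist_sketch is a name invented here):

    locks_exist_sketch() {
      local pid=$1
      # lslocks lists the locks held by the given process; the SPDK lock
      # files all contain "spdk_cpu_lock" in their path, so grep suffices
      lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist_sketch 57917 && echo "target 57917 holds its core lock"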
00:26:21.529 ************************************ 00:26:21.529 START TEST locking_app_on_locked_coremask 00:26:21.529 ************************************ 00:26:21.529 08:22:54 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:26:21.529 08:22:54 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57991 00:26:21.529 08:22:54 -- event/cpu_locks.sh@116 -- # waitforlisten 57991 /var/tmp/spdk.sock 00:26:21.529 08:22:54 -- common/autotest_common.sh@819 -- # '[' -z 57991 ']' 00:26:21.529 08:22:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.529 08:22:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:21.529 08:22:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.529 08:22:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:21.529 08:22:54 -- common/autotest_common.sh@10 -- # set +x 00:26:21.529 08:22:54 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:21.529 [2024-04-17 08:22:54.840496] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:21.529 [2024-04-17 08:22:54.840578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57991 ] 00:26:21.788 [2024-04-17 08:22:54.979179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.788 [2024-04-17 08:22:55.083636] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:21.788 [2024-04-17 08:22:55.083784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.724 08:22:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:22.724 08:22:55 -- common/autotest_common.sh@852 -- # return 0 00:26:22.724 08:22:55 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58019 00:26:22.724 08:22:55 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58019 /var/tmp/spdk2.sock 00:26:22.724 08:22:55 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:26:22.724 08:22:55 -- common/autotest_common.sh@640 -- # local es=0 00:26:22.724 08:22:55 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 58019 /var/tmp/spdk2.sock 00:26:22.724 08:22:55 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:26:22.724 08:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.724 08:22:55 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:26:22.724 08:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.724 08:22:55 -- common/autotest_common.sh@643 -- # waitforlisten 58019 /var/tmp/spdk2.sock 00:26:22.724 08:22:55 -- common/autotest_common.sh@819 -- # '[' -z 58019 ']' 00:26:22.724 08:22:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:26:22.724 08:22:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:22.724 08:22:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:26:22.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
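The NOT wrapping of waitforlisten above inverts the exit status so that an expected failure counts as a pass: the second target must not come up, since core 0 is already locked by pid 57991, and the claim error plus the "kill: No such process" fallout appear just below. A sketch of the inversion, assuming the shape of the real NOT helper in autotest_common.sh (NOT_sketch is a name invented here):

    NOT_sketch() {
      if "$@"; then
        return 1   # the wrapped command was expected to fail but succeeded
      fi
      return 0     # non-zero exit status is the desired outcome
    }
    NOT_sketch false && echo "expected failure observed"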
00:26:22.724 08:22:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:22.724 08:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.724 [2024-04-17 08:22:55.774275] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:22.724 [2024-04-17 08:22:55.774345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58019 ] 00:26:22.724 [2024-04-17 08:22:55.904857] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57991 has claimed it. 00:26:22.724 [2024-04-17 08:22:55.904922] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:26:23.290 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (58019) - No such process 00:26:23.290 ERROR: process (pid: 58019) is no longer running 00:26:23.290 08:22:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:23.290 08:22:56 -- common/autotest_common.sh@852 -- # return 1 00:26:23.290 08:22:56 -- common/autotest_common.sh@643 -- # es=1 00:26:23.290 08:22:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:23.290 08:22:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:23.290 08:22:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:23.290 08:22:56 -- event/cpu_locks.sh@122 -- # locks_exist 57991 00:26:23.290 08:22:56 -- event/cpu_locks.sh@22 -- # lslocks -p 57991 00:26:23.290 08:22:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:26:23.612 08:22:56 -- event/cpu_locks.sh@124 -- # killprocess 57991 00:26:23.612 08:22:56 -- common/autotest_common.sh@926 -- # '[' -z 57991 ']' 00:26:23.612 08:22:56 -- common/autotest_common.sh@930 -- # kill -0 57991 00:26:23.612 08:22:56 -- common/autotest_common.sh@931 -- # uname 00:26:23.612 08:22:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:23.613 08:22:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57991 00:26:23.613 08:22:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:23.613 08:22:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:23.613 killing process with pid 57991 00:26:23.613 08:22:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57991' 00:26:23.613 08:22:56 -- common/autotest_common.sh@945 -- # kill 57991 00:26:23.613 08:22:56 -- common/autotest_common.sh@950 -- # wait 57991 00:26:24.180 00:26:24.180 real 0m2.473s 00:26:24.180 user 0m2.754s 00:26:24.180 sys 0m0.626s 00:26:24.180 08:22:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:24.180 ************************************ 00:26:24.180 END TEST locking_app_on_locked_coremask 00:26:24.180 ************************************ 00:26:24.180 08:22:57 -- common/autotest_common.sh@10 -- # set +x 00:26:24.180 08:22:57 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:26:24.180 08:22:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:24.180 08:22:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:24.180 08:22:57 -- common/autotest_common.sh@10 -- # set +x 00:26:24.180 ************************************ 00:26:24.180 START TEST locking_overlapped_coremask 00:26:24.180 ************************************ 00:26:24.180 08:22:57 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:26:24.180 08:22:57 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58070 00:26:24.180 08:22:57 -- event/cpu_locks.sh@133 -- # waitforlisten 58070 /var/tmp/spdk.sock 00:26:24.180 08:22:57 -- common/autotest_common.sh@819 -- # '[' -z 58070 ']' 00:26:24.180 08:22:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.180 08:22:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:24.180 08:22:57 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:26:24.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.180 08:22:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.180 08:22:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:24.180 08:22:57 -- common/autotest_common.sh@10 -- # set +x 00:26:24.180 [2024-04-17 08:22:57.358916] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:24.180 [2024-04-17 08:22:57.358989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58070 ] 00:26:24.180 [2024-04-17 08:22:57.485256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:24.439 [2024-04-17 08:22:57.587741] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:24.439 [2024-04-17 08:22:57.587979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.439 [2024-04-17 08:22:57.588181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:24.439 [2024-04-17 08:22:57.588215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.007 08:22:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:25.007 08:22:58 -- common/autotest_common.sh@852 -- # return 0 00:26:25.007 08:22:58 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58099 00:26:25.007 08:22:58 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58099 /var/tmp/spdk2.sock 00:26:25.007 08:22:58 -- common/autotest_common.sh@640 -- # local es=0 00:26:25.007 08:22:58 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 58099 /var/tmp/spdk2.sock 00:26:25.007 08:22:58 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:26:25.007 08:22:58 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:26:25.007 08:22:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:25.007 08:22:58 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:26:25.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:26:25.007 08:22:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:25.007 08:22:58 -- common/autotest_common.sh@643 -- # waitforlisten 58099 /var/tmp/spdk2.sock 00:26:25.007 08:22:58 -- common/autotest_common.sh@819 -- # '[' -z 58099 ']' 00:26:25.007 08:22:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:26:25.007 08:22:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:25.007 08:22:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
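Why the second target above is expected to fail: core masks 0x7 (cores 0-2) and 0x1c (cores 2-4) intersect on core 2, so the 0x1c instance cannot take that core's lock; the claim error on core 2 follows just below. The overlap is easy to confirm with shell arithmetic:

    # 0x7 = 0b00111 (cores 0-2), 0x1c = 0b11100 (cores 2-4)
    m1=0x7 m2=0x1c
    printf 'shared cores mask: 0x%x\n' $(( m1 & m2 ))   # prints 0x4, i.e. core 2
    (( (m1 & m2) != 0 )) && echo "overlap -> second target must fail"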
00:26:25.007 08:22:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:25.007 08:22:58 -- common/autotest_common.sh@10 -- # set +x 00:26:25.266 [2024-04-17 08:22:58.362191] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:25.266 [2024-04-17 08:22:58.362270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58099 ] 00:26:25.266 [2024-04-17 08:22:58.495681] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58070 has claimed it. 00:26:25.266 [2024-04-17 08:22:58.495753] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:26:25.834 ERROR: process (pid: 58099) is no longer running 00:26:25.834 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (58099) - No such process 00:26:25.834 08:22:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:25.834 08:22:59 -- common/autotest_common.sh@852 -- # return 1 00:26:25.834 08:22:59 -- common/autotest_common.sh@643 -- # es=1 00:26:25.834 08:22:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:25.834 08:22:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:25.834 08:22:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:25.834 08:22:59 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:26:25.834 08:22:59 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:26:25.834 08:22:59 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:26:25.834 08:22:59 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:26:25.834 08:22:59 -- event/cpu_locks.sh@141 -- # killprocess 58070 00:26:25.834 08:22:59 -- common/autotest_common.sh@926 -- # '[' -z 58070 ']' 00:26:25.834 08:22:59 -- common/autotest_common.sh@930 -- # kill -0 58070 00:26:25.834 08:22:59 -- common/autotest_common.sh@931 -- # uname 00:26:25.834 08:22:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:25.834 08:22:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58070 00:26:25.834 killing process with pid 58070 00:26:25.834 08:22:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:25.834 08:22:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:25.834 08:22:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58070' 00:26:25.834 08:22:59 -- common/autotest_common.sh@945 -- # kill 58070 00:26:25.834 08:22:59 -- common/autotest_common.sh@950 -- # wait 58070 00:26:26.402 00:26:26.402 real 0m2.125s 00:26:26.402 user 0m5.852s 00:26:26.402 sys 0m0.381s 00:26:26.402 08:22:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:26.402 08:22:59 -- common/autotest_common.sh@10 -- # set +x 00:26:26.402 ************************************ 00:26:26.402 END TEST locking_overlapped_coremask 00:26:26.402 ************************************ 00:26:26.402 08:22:59 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:26:26.402 08:22:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:26.402 08:22:59 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:26:26.402 08:22:59 -- common/autotest_common.sh@10 -- # set +x 00:26:26.402 ************************************ 00:26:26.402 START TEST locking_overlapped_coremask_via_rpc 00:26:26.402 ************************************ 00:26:26.402 08:22:59 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:26:26.402 08:22:59 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58146 00:26:26.402 08:22:59 -- event/cpu_locks.sh@149 -- # waitforlisten 58146 /var/tmp/spdk.sock 00:26:26.402 08:22:59 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:26:26.402 08:22:59 -- common/autotest_common.sh@819 -- # '[' -z 58146 ']' 00:26:26.402 08:22:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.402 08:22:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:26.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.402 08:22:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.402 08:22:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:26.402 08:22:59 -- common/autotest_common.sh@10 -- # set +x 00:26:26.402 [2024-04-17 08:22:59.556145] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:26.402 [2024-04-17 08:22:59.556230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58146 ] 00:26:26.402 [2024-04-17 08:22:59.696373] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:26:26.402 [2024-04-17 08:22:59.696478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:26.661 [2024-04-17 08:22:59.800415] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:26.661 [2024-04-17 08:22:59.800786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.661 [2024-04-17 08:22:59.800851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.661 [2024-04-17 08:22:59.800853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:27.228 08:23:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:27.228 08:23:00 -- common/autotest_common.sh@852 -- # return 0 00:26:27.228 08:23:00 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58176 00:26:27.228 08:23:00 -- event/cpu_locks.sh@153 -- # waitforlisten 58176 /var/tmp/spdk2.sock 00:26:27.228 08:23:00 -- common/autotest_common.sh@819 -- # '[' -z 58176 ']' 00:26:27.228 08:23:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:26:27.228 08:23:00 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:26:27.228 08:23:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:27.228 08:23:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:26:27.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
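Unlike the previous test, both targets here come up despite the core-2 overlap, because --disable-cpumask-locks skips lock-file creation at startup; the locks are claimed later, per target, over JSON-RPC. A condensed sketch of the launch just performed (binary path and flags as in this run; backgrounding with & stands in for the real waitforlisten handshake):

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    $tgt -m 0x7  --disable-cpumask-locks &                         # cores 0-2
    $tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4
    # with locking deferred, both reactor sets spin up and share core 2
    # until framework_enable_cpumask_locks is issued to each instance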
00:26:27.228 08:23:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:27.228 08:23:00 -- common/autotest_common.sh@10 -- # set +x 00:26:27.228 [2024-04-17 08:23:00.529582] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:27.228 [2024-04-17 08:23:00.529659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58176 ] 00:26:27.487 [2024-04-17 08:23:00.667269] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:26:27.487 [2024-04-17 08:23:00.667431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:27.747 [2024-04-17 08:23:00.881034] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:27.747 [2024-04-17 08:23:00.881698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:27.747 [2024-04-17 08:23:00.881877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:27.747 [2024-04-17 08:23:00.881904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:28.312 08:23:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:28.312 08:23:01 -- common/autotest_common.sh@852 -- # return 0 00:26:28.312 08:23:01 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:26:28.312 08:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.312 08:23:01 -- common/autotest_common.sh@10 -- # set +x 00:26:28.312 08:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:28.312 08:23:01 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:26:28.312 08:23:01 -- common/autotest_common.sh@640 -- # local es=0 00:26:28.312 08:23:01 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:26:28.312 08:23:01 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:28.312 08:23:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:28.312 08:23:01 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:28.312 08:23:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:28.312 08:23:01 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:26:28.312 08:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.312 08:23:01 -- common/autotest_common.sh@10 -- # set +x 00:26:28.312 [2024-04-17 08:23:01.509586] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58146 has claimed it. 
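The claim_cpu_cores error above is the point of the test: the first target took locks on cores 0-2 via framework_enable_cpumask_locks, so the same RPC against the second target has to fail on core 2; the JSON-RPC dump that follows is the client-side view of that failure. The equivalent manual sequence, sketched with rpc.py (script path assumed from the standard repo layout; sockets as in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_enable_cpumask_locks            # first target: locks cores 0-2
    $rpc -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
      && echo "unexpected: core 2 double-claimed" \
      || echo "expected failure: core 2 already locked (Code=-32603)"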
00:26:28.312 request: 00:26:28.312 2024/04/17 08:23:01 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:26:28.312 { 00:26:28.312 "method": "framework_enable_cpumask_locks", 00:26:28.312 "params": {} 00:26:28.312 } 00:26:28.312 Got JSON-RPC error response 00:26:28.312 GoRPCClient: error on JSON-RPC call 00:26:28.312 08:23:01 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:28.312 08:23:01 -- common/autotest_common.sh@643 -- # es=1 00:26:28.312 08:23:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:28.312 08:23:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:28.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.312 08:23:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:28.312 08:23:01 -- event/cpu_locks.sh@158 -- # waitforlisten 58146 /var/tmp/spdk.sock 00:26:28.312 08:23:01 -- common/autotest_common.sh@819 -- # '[' -z 58146 ']' 00:26:28.312 08:23:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.312 08:23:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:28.312 08:23:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.312 08:23:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:28.312 08:23:01 -- common/autotest_common.sh@10 -- # set +x 00:26:28.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:26:28.570 08:23:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:28.570 08:23:01 -- common/autotest_common.sh@852 -- # return 0 00:26:28.570 08:23:01 -- event/cpu_locks.sh@159 -- # waitforlisten 58176 /var/tmp/spdk2.sock 00:26:28.570 08:23:01 -- common/autotest_common.sh@819 -- # '[' -z 58176 ']' 00:26:28.570 08:23:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:26:28.570 08:23:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:28.570 08:23:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
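The check_remaining_locks step just below compares the lock files actually present under /var/tmp against a brace-expanded expected list, one entry per core in the 0x7 mask. Its core comparison amounts to this sketch (assumed shape of the helper in cpu_locks.sh):

    # cores 0-2 locked -> exactly these three files must exist
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]] \
      && echo "lock files on disk match the claimed cores"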
00:26:28.570 08:23:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:28.570 08:23:01 -- common/autotest_common.sh@10 -- # set +x 00:26:28.828 ************************************ 00:26:28.828 END TEST locking_overlapped_coremask_via_rpc 00:26:28.828 ************************************ 00:26:28.828 08:23:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:28.828 08:23:01 -- common/autotest_common.sh@852 -- # return 0 00:26:28.828 08:23:01 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:26:28.828 08:23:01 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:26:28.828 08:23:01 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:26:28.828 08:23:01 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:26:28.828 00:26:28.828 real 0m2.498s 00:26:28.828 user 0m1.211s 00:26:28.828 sys 0m0.223s 00:26:28.828 08:23:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:28.828 08:23:01 -- common/autotest_common.sh@10 -- # set +x 00:26:28.828 08:23:02 -- event/cpu_locks.sh@174 -- # cleanup 00:26:28.828 08:23:02 -- event/cpu_locks.sh@15 -- # [[ -z 58146 ]] 00:26:28.828 08:23:02 -- event/cpu_locks.sh@15 -- # killprocess 58146 00:26:28.829 08:23:02 -- common/autotest_common.sh@926 -- # '[' -z 58146 ']' 00:26:28.829 08:23:02 -- common/autotest_common.sh@930 -- # kill -0 58146 00:26:28.829 08:23:02 -- common/autotest_common.sh@931 -- # uname 00:26:28.829 08:23:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:28.829 08:23:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58146 00:26:28.829 killing process with pid 58146 00:26:28.829 08:23:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:28.829 08:23:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:28.829 08:23:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58146' 00:26:28.829 08:23:02 -- common/autotest_common.sh@945 -- # kill 58146 00:26:28.829 08:23:02 -- common/autotest_common.sh@950 -- # wait 58146 00:26:29.395 08:23:02 -- event/cpu_locks.sh@16 -- # [[ -z 58176 ]] 00:26:29.395 08:23:02 -- event/cpu_locks.sh@16 -- # killprocess 58176 00:26:29.395 08:23:02 -- common/autotest_common.sh@926 -- # '[' -z 58176 ']' 00:26:29.395 08:23:02 -- common/autotest_common.sh@930 -- # kill -0 58176 00:26:29.395 08:23:02 -- common/autotest_common.sh@931 -- # uname 00:26:29.395 08:23:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:29.395 08:23:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58176 00:26:29.395 killing process with pid 58176 00:26:29.395 08:23:02 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:26:29.395 08:23:02 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:26:29.395 08:23:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58176' 00:26:29.395 08:23:02 -- common/autotest_common.sh@945 -- # kill 58176 00:26:29.395 08:23:02 -- common/autotest_common.sh@950 -- # wait 58176 00:26:29.654 08:23:02 -- event/cpu_locks.sh@18 -- # rm -f 00:26:29.654 Process with pid 58146 is not found 00:26:29.654 08:23:02 -- event/cpu_locks.sh@1 -- # cleanup 00:26:29.654 08:23:02 -- event/cpu_locks.sh@15 -- # [[ -z 58146 ]] 00:26:29.654 08:23:02 -- event/cpu_locks.sh@15 -- # 
killprocess 58146 00:26:29.654 08:23:02 -- common/autotest_common.sh@926 -- # '[' -z 58146 ']' 00:26:29.654 08:23:02 -- common/autotest_common.sh@930 -- # kill -0 58146 00:26:29.654 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (58146) - No such process 00:26:29.654 08:23:02 -- common/autotest_common.sh@953 -- # echo 'Process with pid 58146 is not found' 00:26:29.654 08:23:02 -- event/cpu_locks.sh@16 -- # [[ -z 58176 ]] 00:26:29.654 08:23:02 -- event/cpu_locks.sh@16 -- # killprocess 58176 00:26:29.654 08:23:02 -- common/autotest_common.sh@926 -- # '[' -z 58176 ']' 00:26:29.654 08:23:02 -- common/autotest_common.sh@930 -- # kill -0 58176 00:26:29.654 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (58176) - No such process 00:26:29.654 Process with pid 58176 is not found 00:26:29.654 08:23:02 -- common/autotest_common.sh@953 -- # echo 'Process with pid 58176 is not found' 00:26:29.654 08:23:02 -- event/cpu_locks.sh@18 -- # rm -f 00:26:29.654 00:26:29.654 real 0m18.763s 00:26:29.654 user 0m32.753s 00:26:29.654 sys 0m4.817s 00:26:29.654 08:23:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:29.654 08:23:02 -- common/autotest_common.sh@10 -- # set +x 00:26:29.654 ************************************ 00:26:29.654 END TEST cpu_locks 00:26:29.654 ************************************ 00:26:29.654 00:26:29.654 real 0m47.496s 00:26:29.654 user 1m32.307s 00:26:29.654 sys 0m8.666s 00:26:29.654 08:23:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:29.654 08:23:02 -- common/autotest_common.sh@10 -- # set +x 00:26:29.654 ************************************ 00:26:29.654 END TEST event 00:26:29.654 ************************************ 00:26:29.654 08:23:02 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:26:29.654 08:23:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:29.654 08:23:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:29.654 08:23:02 -- common/autotest_common.sh@10 -- # set +x 00:26:29.654 ************************************ 00:26:29.654 START TEST thread 00:26:29.654 ************************************ 00:26:29.654 08:23:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:26:29.913 * Looking for test storage... 00:26:29.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:26:29.913 08:23:03 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:26:29.913 08:23:03 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:26:29.913 08:23:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:29.913 08:23:03 -- common/autotest_common.sh@10 -- # set +x 00:26:29.913 ************************************ 00:26:29.913 START TEST thread_poller_perf 00:26:29.913 ************************************ 00:26:29.913 08:23:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:26:29.913 [2024-04-17 08:23:03.127213] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:29.913 [2024-04-17 08:23:03.127295] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58322 ] 00:26:30.171 [2024-04-17 08:23:03.269237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.171 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:26:30.171 [2024-04-17 08:23:03.374919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.545 ====================================== 00:26:31.545 busy:2298056348 (cyc) 00:26:31.545 total_run_count: 309000 00:26:31.545 tsc_hz: 2290000000 (cyc) 00:26:31.545 ====================================== 00:26:31.545 poller_cost: 7437 (cyc), 3247 (nsec) 00:26:31.545 00:26:31.545 real 0m1.383s 00:26:31.545 user 0m1.229s 00:26:31.545 sys 0m0.046s 00:26:31.545 08:23:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:31.545 08:23:04 -- common/autotest_common.sh@10 -- # set +x 00:26:31.545 ************************************ 00:26:31.545 END TEST thread_poller_perf 00:26:31.545 ************************************ 00:26:31.545 08:23:04 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:26:31.545 08:23:04 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:26:31.545 08:23:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:31.545 08:23:04 -- common/autotest_common.sh@10 -- # set +x 00:26:31.545 ************************************ 00:26:31.545 START TEST thread_poller_perf 00:26:31.545 ************************************ 00:26:31.545 08:23:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:26:31.545 [2024-04-17 08:23:04.559377] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:31.545 [2024-04-17 08:23:04.559480] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58352 ] 00:26:31.545 [2024-04-17 08:23:04.688670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.545 [2024-04-17 08:23:04.792313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.545 Running 1000 pollers for 1 seconds with 0 microseconds period. 
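The first run's poller_cost above follows directly from its counters: busy cycles divided by total polls, then cycles converted to nanoseconds via tsc_hz; the second run, with a 0 microsecond period, is summarized the same way just below. Reproducing the first run's numbers:

    busy=2298056348 runs=309000 tsc_hz=2290000000
    echo "cost: $(( busy / runs )) cyc"                           # -> 7437
    echo "cost: $(( busy * 1000000000 / tsc_hz / runs )) nsec"    # -> 3247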
00:26:32.917 ====================================== 00:26:32.917 busy:2292410370 (cyc) 00:26:32.917 total_run_count: 4324000 00:26:32.917 tsc_hz: 2290000000 (cyc) 00:26:32.917 ====================================== 00:26:32.917 poller_cost: 530 (cyc), 231 (nsec) 00:26:32.917 00:26:32.917 real 0m1.355s 00:26:32.917 user 0m1.206s 00:26:32.917 sys 0m0.042s 00:26:32.917 08:23:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:32.917 08:23:05 -- common/autotest_common.sh@10 -- # set +x 00:26:32.917 ************************************ 00:26:32.917 END TEST thread_poller_perf 00:26:32.917 ************************************ 00:26:32.917 08:23:05 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:26:32.917 00:26:32.917 real 0m2.960s 00:26:32.917 user 0m2.518s 00:26:32.917 sys 0m0.239s 00:26:32.917 08:23:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:32.917 08:23:05 -- common/autotest_common.sh@10 -- # set +x 00:26:32.917 ************************************ 00:26:32.917 END TEST thread 00:26:32.917 ************************************ 00:26:32.917 08:23:05 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:26:32.917 08:23:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:32.917 08:23:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:32.917 08:23:05 -- common/autotest_common.sh@10 -- # set +x 00:26:32.917 ************************************ 00:26:32.918 START TEST accel 00:26:32.918 ************************************ 00:26:32.918 08:23:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:26:32.918 * Looking for test storage... 00:26:32.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:26:32.918 08:23:06 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:26:32.918 08:23:06 -- accel/accel.sh@74 -- # get_expected_opcs 00:26:32.918 08:23:06 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:26:32.918 08:23:06 -- accel/accel.sh@59 -- # spdk_tgt_pid=58431 00:26:32.918 08:23:06 -- accel/accel.sh@60 -- # waitforlisten 58431 00:26:32.918 08:23:06 -- common/autotest_common.sh@819 -- # '[' -z 58431 ']' 00:26:32.918 08:23:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.918 08:23:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:32.918 08:23:06 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:26:32.918 08:23:06 -- accel/accel.sh@58 -- # build_accel_config 00:26:32.918 08:23:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.918 08:23:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:32.918 08:23:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:26:32.918 08:23:06 -- common/autotest_common.sh@10 -- # set +x 00:26:32.918 08:23:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:32.918 08:23:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:32.918 08:23:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:26:32.918 08:23:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:26:32.918 08:23:06 -- accel/accel.sh@41 -- # local IFS=, 00:26:32.918 08:23:06 -- accel/accel.sh@42 -- # jq -r . 00:26:32.918 [2024-04-17 08:23:06.184750] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:32.918 [2024-04-17 08:23:06.184832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58431 ] 00:26:33.175 [2024-04-17 08:23:06.321540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.175 [2024-04-17 08:23:06.425534] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:33.175 [2024-04-17 08:23:06.425684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.108 08:23:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:34.108 08:23:07 -- common/autotest_common.sh@852 -- # return 0 00:26:34.108 08:23:07 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:26:34.108 08:23:07 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:26:34.109 08:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:34.109 08:23:07 -- common/autotest_common.sh@10 -- # set +x 00:26:34.109 08:23:07 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:26:34.109 08:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:34.109 08:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # IFS== 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:26:34.109 08:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:26:34.109 08:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # IFS== 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:26:34.109 08:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:26:34.109 08:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # IFS== 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:26:34.109 08:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:26:34.109 08:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # IFS== 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:26:34.109 08:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:26:34.109 08:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # IFS== 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:26:34.109 08:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:26:34.109 08:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # IFS== 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:26:34.109 08:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:26:34.109 08:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # IFS== 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:26:34.109 08:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:26:34.109 08:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # IFS== 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:26:34.109 08:23:07 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:26:34.109 08:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # IFS== 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:26:34.109 08:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:26:34.109 08:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # IFS== 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:26:34.109 08:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:26:34.109 08:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # IFS== 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:26:34.109 08:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:26:34.109 08:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # IFS== 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:26:34.109 08:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:26:34.109 08:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # IFS== 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:26:34.109 08:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:26:34.109 08:23:07 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # IFS== 00:26:34.109 08:23:07 -- accel/accel.sh@64 -- # read -r opc module 00:26:34.109 08:23:07 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:26:34.109 08:23:07 -- accel/accel.sh@67 -- # killprocess 58431 00:26:34.109 08:23:07 -- common/autotest_common.sh@926 -- # '[' -z 58431 ']' 00:26:34.109 08:23:07 -- common/autotest_common.sh@930 -- # kill -0 58431 00:26:34.109 08:23:07 -- common/autotest_common.sh@931 -- # uname 00:26:34.109 08:23:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:34.109 08:23:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58431 00:26:34.109 08:23:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:34.109 08:23:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:34.109 killing process with pid 58431 00:26:34.109 08:23:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58431' 00:26:34.109 08:23:07 -- common/autotest_common.sh@945 -- # kill 58431 00:26:34.109 08:23:07 -- common/autotest_common.sh@950 -- # wait 58431 00:26:34.367 08:23:07 -- accel/accel.sh@68 -- # trap - ERR 00:26:34.367 08:23:07 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:26:34.367 08:23:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:34.367 08:23:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:34.367 08:23:07 -- common/autotest_common.sh@10 -- # set +x 00:26:34.367 08:23:07 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:26:34.367 08:23:07 -- accel/accel.sh@12 -- # build_accel_config 00:26:34.367 08:23:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:26:34.367 08:23:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:26:34.367 08:23:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:34.367 08:23:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:34.367 08:23:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:26:34.367 08:23:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:26:34.367 08:23:07 -- accel/accel.sh@41 -- # local IFS=, 00:26:34.367 08:23:07 -- accel/accel.sh@42 -- # jq -r . 00:26:34.367 08:23:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:34.367 08:23:07 -- common/autotest_common.sh@10 -- # set +x 00:26:34.367 08:23:07 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:26:34.367 08:23:07 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:26:34.367 08:23:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:34.367 08:23:07 -- common/autotest_common.sh@10 -- # set +x 00:26:34.367 ************************************ 00:26:34.367 START TEST accel_missing_filename 00:26:34.367 ************************************ 00:26:34.367 08:23:07 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:26:34.367 08:23:07 -- common/autotest_common.sh@640 -- # local es=0 00:26:34.367 08:23:07 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:26:34.367 08:23:07 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:26:34.367 08:23:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:34.367 08:23:07 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:26:34.367 08:23:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:34.367 08:23:07 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:26:34.367 08:23:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:26:34.367 08:23:07 -- accel/accel.sh@12 -- # build_accel_config 00:26:34.367 08:23:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:26:34.367 08:23:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:34.367 08:23:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:34.367 08:23:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:26:34.367 08:23:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:26:34.367 08:23:07 -- accel/accel.sh@41 -- # local IFS=, 00:26:34.368 08:23:07 -- accel/accel.sh@42 -- # jq -r . 00:26:34.368 [2024-04-17 08:23:07.628706] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:34.368 [2024-04-17 08:23:07.628839] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58495 ] 00:26:34.626 [2024-04-17 08:23:07.772888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.626 [2024-04-17 08:23:07.875969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.626 [2024-04-17 08:23:07.919381] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:34.885 [2024-04-17 08:23:07.980038] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:26:34.885 A filename is required. 
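"A filename is required." is the expected outcome here: compress and decompress workloads read their input from -l, so accel_missing_filename runs accel_perf without it and treats the non-zero exit as a pass. The check, sketched (binary path as in this run):

    perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    if ! $perf -t 1 -w compress; then
      echo "expected: compress without -l <input file> must fail"
    fi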
00:26:34.885 08:23:08 -- common/autotest_common.sh@643 -- # es=234 00:26:34.885 08:23:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:34.885 08:23:08 -- common/autotest_common.sh@652 -- # es=106 00:26:34.885 08:23:08 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:34.885 08:23:08 -- common/autotest_common.sh@660 -- # es=1 00:26:34.885 08:23:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:34.885 00:26:34.885 real 0m0.485s 00:26:34.885 user 0m0.328s 00:26:34.885 sys 0m0.097s 00:26:34.885 08:23:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:34.885 08:23:08 -- common/autotest_common.sh@10 -- # set +x 00:26:34.885 ************************************ 00:26:34.885 END TEST accel_missing_filename 00:26:34.885 ************************************ 00:26:34.885 08:23:08 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:26:34.885 08:23:08 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:26:34.885 08:23:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:34.885 08:23:08 -- common/autotest_common.sh@10 -- # set +x 00:26:34.885 ************************************ 00:26:34.885 START TEST accel_compress_verify 00:26:34.885 ************************************ 00:26:34.885 08:23:08 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:26:34.885 08:23:08 -- common/autotest_common.sh@640 -- # local es=0 00:26:34.885 08:23:08 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:26:34.886 08:23:08 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:26:34.886 08:23:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:34.886 08:23:08 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:26:34.886 08:23:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:34.886 08:23:08 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:26:34.886 08:23:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:26:34.886 08:23:08 -- accel/accel.sh@12 -- # build_accel_config 00:26:34.886 08:23:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:26:34.886 08:23:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:34.886 08:23:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:34.886 08:23:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:26:34.886 08:23:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:26:34.886 08:23:08 -- accel/accel.sh@41 -- # local IFS=, 00:26:34.886 08:23:08 -- accel/accel.sh@42 -- # jq -r . 00:26:34.886 [2024-04-17 08:23:08.171152] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:34.886 [2024-04-17 08:23:08.171252] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58525 ] 00:26:35.144 [2024-04-17 08:23:08.314660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.144 [2024-04-17 08:23:08.420389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.144 [2024-04-17 08:23:08.464756] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:35.402 [2024-04-17 08:23:08.526444] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:26:35.402 00:26:35.402 Compression does not support the verify option, aborting. 00:26:35.402 08:23:08 -- common/autotest_common.sh@643 -- # es=161 00:26:35.402 08:23:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:35.402 08:23:08 -- common/autotest_common.sh@652 -- # es=33 00:26:35.402 08:23:08 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:35.402 08:23:08 -- common/autotest_common.sh@660 -- # es=1 00:26:35.402 08:23:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:35.402 00:26:35.402 real 0m0.488s 00:26:35.402 user 0m0.338s 00:26:35.402 sys 0m0.092s 00:26:35.402 08:23:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:35.402 08:23:08 -- common/autotest_common.sh@10 -- # set +x 00:26:35.402 ************************************ 00:26:35.402 END TEST accel_compress_verify 00:26:35.402 ************************************ 00:26:35.402 08:23:08 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:26:35.402 08:23:08 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:26:35.402 08:23:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:35.402 08:23:08 -- common/autotest_common.sh@10 -- # set +x 00:26:35.402 ************************************ 00:26:35.402 START TEST accel_wrong_workload 00:26:35.402 ************************************ 00:26:35.402 08:23:08 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:26:35.402 08:23:08 -- common/autotest_common.sh@640 -- # local es=0 00:26:35.402 08:23:08 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:26:35.402 08:23:08 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:26:35.402 08:23:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:35.402 08:23:08 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:26:35.402 08:23:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:35.402 08:23:08 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:26:35.402 08:23:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:26:35.402 08:23:08 -- accel/accel.sh@12 -- # build_accel_config 00:26:35.402 08:23:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:26:35.402 08:23:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:35.402 08:23:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:35.402 08:23:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:26:35.402 08:23:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:26:35.402 08:23:08 -- accel/accel.sh@41 -- # local IFS=, 00:26:35.402 08:23:08 -- accel/accel.sh@42 -- # jq -r . 
00:26:35.402 Unsupported workload type: foobar 00:26:35.402 [2024-04-17 08:23:08.729740] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:26:35.660 accel_perf options: 00:26:35.660 [-h help message] 00:26:35.660 [-q queue depth per core] 00:26:35.660 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:26:35.660 [-T number of threads per core 00:26:35.660 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:26:35.660 [-t time in seconds] 00:26:35.660 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:26:35.660 [ dif_verify, , dif_generate, dif_generate_copy 00:26:35.660 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:26:35.660 [-l for compress/decompress workloads, name of uncompressed input file 00:26:35.660 [-S for crc32c workload, use this seed value (default 0) 00:26:35.660 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:26:35.660 [-f for fill workload, use this BYTE value (default 255) 00:26:35.660 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:26:35.660 [-y verify result if this switch is on] 00:26:35.660 [-a tasks to allocate per core (default: same value as -q)] 00:26:35.660 Can be used to spread operations across a wider range of memory. 00:26:35.660 08:23:08 -- common/autotest_common.sh@643 -- # es=1 00:26:35.660 08:23:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:35.660 08:23:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:35.660 08:23:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:35.660 00:26:35.660 real 0m0.041s 00:26:35.660 user 0m0.018s 00:26:35.660 sys 0m0.022s 00:26:35.660 08:23:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:35.660 08:23:08 -- common/autotest_common.sh@10 -- # set +x 00:26:35.660 ************************************ 00:26:35.660 END TEST accel_wrong_workload 00:26:35.660 ************************************ 00:26:35.660 08:23:08 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:26:35.660 08:23:08 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:26:35.660 08:23:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:35.660 08:23:08 -- common/autotest_common.sh@10 -- # set +x 00:26:35.660 ************************************ 00:26:35.660 START TEST accel_negative_buffers 00:26:35.660 ************************************ 00:26:35.660 08:23:08 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:26:35.660 08:23:08 -- common/autotest_common.sh@640 -- # local es=0 00:26:35.660 08:23:08 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:26:35.660 08:23:08 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:26:35.660 08:23:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:35.660 08:23:08 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:26:35.660 08:23:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:35.660 08:23:08 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:26:35.660 08:23:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:26:35.660 08:23:08 -- accel/accel.sh@12 -- # 
build_accel_config 00:26:35.660 08:23:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:26:35.660 08:23:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:35.660 08:23:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:35.660 08:23:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:26:35.660 08:23:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:26:35.660 08:23:08 -- accel/accel.sh@41 -- # local IFS=, 00:26:35.660 08:23:08 -- accel/accel.sh@42 -- # jq -r . 00:26:35.660 -x option must be non-negative. 00:26:35.660 [2024-04-17 08:23:08.827618] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:26:35.660 accel_perf options: 00:26:35.660 [-h help message] 00:26:35.660 [-q queue depth per core] 00:26:35.660 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:26:35.660 [-T number of threads per core 00:26:35.660 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:26:35.660 [-t time in seconds] 00:26:35.660 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:26:35.660 [ dif_verify, , dif_generate, dif_generate_copy 00:26:35.660 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:26:35.660 [-l for compress/decompress workloads, name of uncompressed input file 00:26:35.660 [-S for crc32c workload, use this seed value (default 0) 00:26:35.660 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:26:35.660 [-f for fill workload, use this BYTE value (default 255) 00:26:35.660 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:26:35.660 [-y verify result if this switch is on] 00:26:35.660 [-a tasks to allocate per core (default: same value as -q)] 00:26:35.660 Can be used to spread operations across a wider range of memory. 
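[editor's note] This usage dump is printed each time accel_perf rejects its command line — above for the bogus workload type, here for the negative -x value. Side by side, the two rejected invocations from this run plus an accepted form; the accepted line is illustrative only and is not executed anywhere in this log:

    accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    "$accel_perf" -t 1 -w foobar         # rejected: foobar is not in the workload list above
    "$accel_perf" -t 1 -w xor -y -x -1   # rejected: -x must be non-negative (minimum 2)
    "$accel_perf" -t 1 -w xor -y -x 2    # accepted: xor across the minimum two source buffers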
00:26:35.660 08:23:08 -- common/autotest_common.sh@643 -- # es=1 00:26:35.660 08:23:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:35.660 08:23:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:35.660 08:23:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:35.660 00:26:35.660 real 0m0.040s 00:26:35.660 user 0m0.023s 00:26:35.660 sys 0m0.017s 00:26:35.660 08:23:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:35.660 08:23:08 -- common/autotest_common.sh@10 -- # set +x 00:26:35.660 ************************************ 00:26:35.660 END TEST accel_negative_buffers 00:26:35.660 ************************************ 00:26:35.660 08:23:08 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:26:35.660 08:23:08 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:26:35.660 08:23:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:35.660 08:23:08 -- common/autotest_common.sh@10 -- # set +x 00:26:35.660 ************************************ 00:26:35.660 START TEST accel_crc32c 00:26:35.660 ************************************ 00:26:35.660 08:23:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:26:35.660 08:23:08 -- accel/accel.sh@16 -- # local accel_opc 00:26:35.660 08:23:08 -- accel/accel.sh@17 -- # local accel_module 00:26:35.660 08:23:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:26:35.660 08:23:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:26:35.660 08:23:08 -- accel/accel.sh@12 -- # build_accel_config 00:26:35.660 08:23:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:26:35.660 08:23:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:35.660 08:23:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:35.660 08:23:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:26:35.660 08:23:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:26:35.660 08:23:08 -- accel/accel.sh@41 -- # local IFS=, 00:26:35.660 08:23:08 -- accel/accel.sh@42 -- # jq -r . 00:26:35.660 [2024-04-17 08:23:08.903858] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:35.661 [2024-04-17 08:23:08.903937] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58583 ] 00:26:35.918 [2024-04-17 08:23:09.045536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.918 [2024-04-17 08:23:09.151943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.294 08:23:10 -- accel/accel.sh@18 -- # out=' 00:26:37.294 SPDK Configuration: 00:26:37.294 Core mask: 0x1 00:26:37.294 00:26:37.294 Accel Perf Configuration: 00:26:37.294 Workload Type: crc32c 00:26:37.294 CRC-32C seed: 32 00:26:37.294 Transfer size: 4096 bytes 00:26:37.294 Vector count 1 00:26:37.294 Module: software 00:26:37.294 Queue depth: 32 00:26:37.294 Allocate depth: 32 00:26:37.294 # threads/core: 1 00:26:37.294 Run time: 1 seconds 00:26:37.294 Verify: Yes 00:26:37.294 00:26:37.294 Running for 1 seconds... 
00:26:37.294 00:26:37.294 Core,Thread Transfers Bandwidth Failed Miscompares 00:26:37.294 ------------------------------------------------------------------------------------ 00:26:37.294 0,0 451296/s 1762 MiB/s 0 0 00:26:37.294 ==================================================================================== 00:26:37.294 Total 451296/s 1762 MiB/s 0 0' 00:26:37.294 08:23:10 -- accel/accel.sh@20 -- # IFS=: 00:26:37.294 08:23:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:26:37.294 08:23:10 -- accel/accel.sh@20 -- # read -r var val 00:26:37.294 08:23:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:26:37.294 08:23:10 -- accel/accel.sh@12 -- # build_accel_config 00:26:37.294 08:23:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:26:37.294 08:23:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:37.294 08:23:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:37.294 08:23:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:26:37.294 08:23:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:26:37.294 08:23:10 -- accel/accel.sh@41 -- # local IFS=, 00:26:37.294 08:23:10 -- accel/accel.sh@42 -- # jq -r . 00:26:37.294 [2024-04-17 08:23:10.379014] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:37.294 [2024-04-17 08:23:10.379102] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58603 ] 00:26:37.294 [2024-04-17 08:23:10.520966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.294 [2024-04-17 08:23:10.624840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.553 08:23:10 -- accel/accel.sh@21 -- # val= 00:26:37.553 08:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # IFS=: 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # read -r var val 00:26:37.553 08:23:10 -- accel/accel.sh@21 -- # val= 00:26:37.553 08:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # IFS=: 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # read -r var val 00:26:37.553 08:23:10 -- accel/accel.sh@21 -- # val=0x1 00:26:37.553 08:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # IFS=: 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # read -r var val 00:26:37.553 08:23:10 -- accel/accel.sh@21 -- # val= 00:26:37.553 08:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # IFS=: 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # read -r var val 00:26:37.553 08:23:10 -- accel/accel.sh@21 -- # val= 00:26:37.553 08:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # IFS=: 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # read -r var val 00:26:37.553 08:23:10 -- accel/accel.sh@21 -- # val=crc32c 00:26:37.553 08:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:26:37.553 08:23:10 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # IFS=: 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # read -r var val 00:26:37.553 08:23:10 -- accel/accel.sh@21 -- # val=32 00:26:37.553 08:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # IFS=: 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # read -r var val 00:26:37.553 08:23:10 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:26:37.553 08:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # IFS=: 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # read -r var val 00:26:37.553 08:23:10 -- accel/accel.sh@21 -- # val= 00:26:37.553 08:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # IFS=: 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # read -r var val 00:26:37.553 08:23:10 -- accel/accel.sh@21 -- # val=software 00:26:37.553 08:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:26:37.553 08:23:10 -- accel/accel.sh@23 -- # accel_module=software 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # IFS=: 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # read -r var val 00:26:37.553 08:23:10 -- accel/accel.sh@21 -- # val=32 00:26:37.553 08:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # IFS=: 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # read -r var val 00:26:37.553 08:23:10 -- accel/accel.sh@21 -- # val=32 00:26:37.553 08:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # IFS=: 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # read -r var val 00:26:37.553 08:23:10 -- accel/accel.sh@21 -- # val=1 00:26:37.553 08:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # IFS=: 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # read -r var val 00:26:37.553 08:23:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:26:37.553 08:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # IFS=: 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # read -r var val 00:26:37.553 08:23:10 -- accel/accel.sh@21 -- # val=Yes 00:26:37.553 08:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # IFS=: 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # read -r var val 00:26:37.553 08:23:10 -- accel/accel.sh@21 -- # val= 00:26:37.553 08:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # IFS=: 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # read -r var val 00:26:37.553 08:23:10 -- accel/accel.sh@21 -- # val= 00:26:37.553 08:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # IFS=: 00:26:37.553 08:23:10 -- accel/accel.sh@20 -- # read -r var val 00:26:38.929 08:23:11 -- accel/accel.sh@21 -- # val= 00:26:38.929 08:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:26:38.929 08:23:11 -- accel/accel.sh@20 -- # IFS=: 00:26:38.929 08:23:11 -- accel/accel.sh@20 -- # read -r var val 00:26:38.929 08:23:11 -- accel/accel.sh@21 -- # val= 00:26:38.929 08:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:26:38.929 08:23:11 -- accel/accel.sh@20 -- # IFS=: 00:26:38.929 08:23:11 -- accel/accel.sh@20 -- # read -r var val 00:26:38.929 08:23:11 -- accel/accel.sh@21 -- # val= 00:26:38.929 08:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:26:38.929 08:23:11 -- accel/accel.sh@20 -- # IFS=: 00:26:38.929 08:23:11 -- accel/accel.sh@20 -- # read -r var val 00:26:38.929 08:23:11 -- accel/accel.sh@21 -- # val= 00:26:38.929 08:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:26:38.929 08:23:11 -- accel/accel.sh@20 -- # IFS=: 00:26:38.929 08:23:11 -- accel/accel.sh@20 -- # read -r var val 00:26:38.929 08:23:11 -- accel/accel.sh@21 -- # val= 00:26:38.929 08:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:26:38.929 08:23:11 -- accel/accel.sh@20 -- # IFS=: 00:26:38.929 08:23:11 -- 
accel/accel.sh@20 -- # read -r var val 00:26:38.929 08:23:11 -- accel/accel.sh@21 -- # val= 00:26:38.929 08:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:26:38.929 08:23:11 -- accel/accel.sh@20 -- # IFS=: 00:26:38.929 08:23:11 -- accel/accel.sh@20 -- # read -r var val 00:26:38.929 08:23:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:26:38.929 08:23:11 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:26:38.929 08:23:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:38.929 00:26:38.929 real 0m2.949s 00:26:38.929 user 0m2.557s 00:26:38.929 sys 0m0.187s 00:26:38.929 08:23:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:38.929 08:23:11 -- common/autotest_common.sh@10 -- # set +x 00:26:38.929 ************************************ 00:26:38.929 END TEST accel_crc32c 00:26:38.929 ************************************ 00:26:38.929 08:23:11 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:26:38.929 08:23:11 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:26:38.929 08:23:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:38.929 08:23:11 -- common/autotest_common.sh@10 -- # set +x 00:26:38.929 ************************************ 00:26:38.929 START TEST accel_crc32c_C2 00:26:38.929 ************************************ 00:26:38.929 08:23:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:26:38.929 08:23:11 -- accel/accel.sh@16 -- # local accel_opc 00:26:38.929 08:23:11 -- accel/accel.sh@17 -- # local accel_module 00:26:38.929 08:23:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:26:38.929 08:23:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:26:38.929 08:23:11 -- accel/accel.sh@12 -- # build_accel_config 00:26:38.929 08:23:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:26:38.929 08:23:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:38.929 08:23:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:38.929 08:23:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:26:38.929 08:23:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:26:38.929 08:23:11 -- accel/accel.sh@41 -- # local IFS=, 00:26:38.929 08:23:11 -- accel/accel.sh@42 -- # jq -r . 00:26:38.929 [2024-04-17 08:23:11.912771] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:38.929 [2024-04-17 08:23:11.912865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58632 ] 00:26:38.929 [2024-04-17 08:23:12.050147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.929 [2024-04-17 08:23:12.156297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.305 08:23:13 -- accel/accel.sh@18 -- # out=' 00:26:40.305 SPDK Configuration: 00:26:40.305 Core mask: 0x1 00:26:40.305 00:26:40.305 Accel Perf Configuration: 00:26:40.305 Workload Type: crc32c 00:26:40.305 CRC-32C seed: 0 00:26:40.305 Transfer size: 4096 bytes 00:26:40.305 Vector count 2 00:26:40.305 Module: software 00:26:40.305 Queue depth: 32 00:26:40.305 Allocate depth: 32 00:26:40.305 # threads/core: 1 00:26:40.305 Run time: 1 seconds 00:26:40.305 Verify: Yes 00:26:40.305 00:26:40.305 Running for 1 seconds... 
00:26:40.305 00:26:40.305 Core,Thread Transfers Bandwidth Failed Miscompares 00:26:40.305 ------------------------------------------------------------------------------------ 00:26:40.305 0,0 355168/s 2774 MiB/s 0 0 00:26:40.305 ==================================================================================== 00:26:40.305 Total 355168/s 1387 MiB/s 0 0' 00:26:40.305 08:23:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:26:40.305 08:23:13 -- accel/accel.sh@20 -- # IFS=: 00:26:40.305 08:23:13 -- accel/accel.sh@20 -- # read -r var val 00:26:40.305 08:23:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:26:40.305 08:23:13 -- accel/accel.sh@12 -- # build_accel_config 00:26:40.305 08:23:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:26:40.305 08:23:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:40.305 08:23:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:40.305 08:23:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:26:40.305 08:23:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:26:40.305 08:23:13 -- accel/accel.sh@41 -- # local IFS=, 00:26:40.305 08:23:13 -- accel/accel.sh@42 -- # jq -r . 00:26:40.305 [2024-04-17 08:23:13.394068] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:40.305 [2024-04-17 08:23:13.394156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58657 ] 00:26:40.305 [2024-04-17 08:23:13.535899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.564 [2024-04-17 08:23:13.640630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.564 08:23:13 -- accel/accel.sh@21 -- # val= 00:26:40.564 08:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # IFS=: 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # read -r var val 00:26:40.564 08:23:13 -- accel/accel.sh@21 -- # val= 00:26:40.564 08:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # IFS=: 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # read -r var val 00:26:40.564 08:23:13 -- accel/accel.sh@21 -- # val=0x1 00:26:40.564 08:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # IFS=: 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # read -r var val 00:26:40.564 08:23:13 -- accel/accel.sh@21 -- # val= 00:26:40.564 08:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # IFS=: 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # read -r var val 00:26:40.564 08:23:13 -- accel/accel.sh@21 -- # val= 00:26:40.564 08:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # IFS=: 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # read -r var val 00:26:40.564 08:23:13 -- accel/accel.sh@21 -- # val=crc32c 00:26:40.564 08:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:26:40.564 08:23:13 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # IFS=: 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # read -r var val 00:26:40.564 08:23:13 -- accel/accel.sh@21 -- # val=0 00:26:40.564 08:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # IFS=: 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # read -r var val 00:26:40.564 08:23:13 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:26:40.564 08:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # IFS=: 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # read -r var val 00:26:40.564 08:23:13 -- accel/accel.sh@21 -- # val= 00:26:40.564 08:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # IFS=: 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # read -r var val 00:26:40.564 08:23:13 -- accel/accel.sh@21 -- # val=software 00:26:40.564 08:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:26:40.564 08:23:13 -- accel/accel.sh@23 -- # accel_module=software 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # IFS=: 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # read -r var val 00:26:40.564 08:23:13 -- accel/accel.sh@21 -- # val=32 00:26:40.564 08:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # IFS=: 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # read -r var val 00:26:40.564 08:23:13 -- accel/accel.sh@21 -- # val=32 00:26:40.564 08:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # IFS=: 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # read -r var val 00:26:40.564 08:23:13 -- accel/accel.sh@21 -- # val=1 00:26:40.564 08:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # IFS=: 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # read -r var val 00:26:40.564 08:23:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:26:40.564 08:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # IFS=: 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # read -r var val 00:26:40.564 08:23:13 -- accel/accel.sh@21 -- # val=Yes 00:26:40.564 08:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # IFS=: 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # read -r var val 00:26:40.564 08:23:13 -- accel/accel.sh@21 -- # val= 00:26:40.564 08:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # IFS=: 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # read -r var val 00:26:40.564 08:23:13 -- accel/accel.sh@21 -- # val= 00:26:40.564 08:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # IFS=: 00:26:40.564 08:23:13 -- accel/accel.sh@20 -- # read -r var val 00:26:41.938 08:23:14 -- accel/accel.sh@21 -- # val= 00:26:41.938 08:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:26:41.938 08:23:14 -- accel/accel.sh@20 -- # IFS=: 00:26:41.938 08:23:14 -- accel/accel.sh@20 -- # read -r var val 00:26:41.938 08:23:14 -- accel/accel.sh@21 -- # val= 00:26:41.938 08:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:26:41.938 08:23:14 -- accel/accel.sh@20 -- # IFS=: 00:26:41.938 08:23:14 -- accel/accel.sh@20 -- # read -r var val 00:26:41.938 08:23:14 -- accel/accel.sh@21 -- # val= 00:26:41.938 08:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:26:41.938 08:23:14 -- accel/accel.sh@20 -- # IFS=: 00:26:41.938 08:23:14 -- accel/accel.sh@20 -- # read -r var val 00:26:41.938 08:23:14 -- accel/accel.sh@21 -- # val= 00:26:41.938 08:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:26:41.938 08:23:14 -- accel/accel.sh@20 -- # IFS=: 00:26:41.938 08:23:14 -- accel/accel.sh@20 -- # read -r var val 00:26:41.938 08:23:14 -- accel/accel.sh@21 -- # val= 00:26:41.938 08:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:26:41.938 08:23:14 -- accel/accel.sh@20 -- # IFS=: 00:26:41.938 08:23:14 -- 
accel/accel.sh@20 -- # read -r var val 00:26:41.938 08:23:14 -- accel/accel.sh@21 -- # val= 00:26:41.938 08:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:26:41.938 08:23:14 -- accel/accel.sh@20 -- # IFS=: 00:26:41.938 08:23:14 -- accel/accel.sh@20 -- # read -r var val 00:26:41.938 08:23:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:26:41.938 08:23:14 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:26:41.938 08:23:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:41.938 00:26:41.938 real 0m2.976s 00:26:41.938 user 0m2.573s 00:26:41.938 sys 0m0.200s 00:26:41.938 08:23:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:41.939 08:23:14 -- common/autotest_common.sh@10 -- # set +x 00:26:41.939 ************************************ 00:26:41.939 END TEST accel_crc32c_C2 00:26:41.939 ************************************ 00:26:41.939 08:23:14 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:26:41.939 08:23:14 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:26:41.939 08:23:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:41.939 08:23:14 -- common/autotest_common.sh@10 -- # set +x 00:26:41.939 ************************************ 00:26:41.939 START TEST accel_copy 00:26:41.939 ************************************ 00:26:41.939 08:23:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:26:41.939 08:23:14 -- accel/accel.sh@16 -- # local accel_opc 00:26:41.939 08:23:14 -- accel/accel.sh@17 -- # local accel_module 00:26:41.939 08:23:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:26:41.939 08:23:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:26:41.939 08:23:14 -- accel/accel.sh@12 -- # build_accel_config 00:26:41.939 08:23:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:26:41.939 08:23:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:41.939 08:23:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:41.939 08:23:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:26:41.939 08:23:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:26:41.939 08:23:14 -- accel/accel.sh@41 -- # local IFS=, 00:26:41.939 08:23:14 -- accel/accel.sh@42 -- # jq -r . 00:26:41.939 [2024-04-17 08:23:14.945263] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:41.939 [2024-04-17 08:23:14.945420] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58686 ] 00:26:41.939 [2024-04-17 08:23:15.083611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.939 [2024-04-17 08:23:15.188174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.328 08:23:16 -- accel/accel.sh@18 -- # out=' 00:26:43.328 SPDK Configuration: 00:26:43.328 Core mask: 0x1 00:26:43.328 00:26:43.328 Accel Perf Configuration: 00:26:43.328 Workload Type: copy 00:26:43.328 Transfer size: 4096 bytes 00:26:43.328 Vector count 1 00:26:43.328 Module: software 00:26:43.328 Queue depth: 32 00:26:43.328 Allocate depth: 32 00:26:43.328 # threads/core: 1 00:26:43.328 Run time: 1 seconds 00:26:43.328 Verify: Yes 00:26:43.328 00:26:43.328 Running for 1 seconds... 
00:26:43.328 00:26:43.328 Core,Thread Transfers Bandwidth Failed Miscompares 00:26:43.328 ------------------------------------------------------------------------------------ 00:26:43.328 0,0 337344/s 1317 MiB/s 0 0 00:26:43.328 ==================================================================================== 00:26:43.328 Total 337344/s 1317 MiB/s 0 0' 00:26:43.328 08:23:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:26:43.328 08:23:16 -- accel/accel.sh@20 -- # IFS=: 00:26:43.328 08:23:16 -- accel/accel.sh@20 -- # read -r var val 00:26:43.328 08:23:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:26:43.328 08:23:16 -- accel/accel.sh@12 -- # build_accel_config 00:26:43.328 08:23:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:26:43.328 08:23:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:43.328 08:23:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:43.328 08:23:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:26:43.328 08:23:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:26:43.328 08:23:16 -- accel/accel.sh@41 -- # local IFS=, 00:26:43.328 08:23:16 -- accel/accel.sh@42 -- # jq -r . 00:26:43.328 [2024-04-17 08:23:16.430062] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:43.328 [2024-04-17 08:23:16.430142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58711 ] 00:26:43.328 [2024-04-17 08:23:16.560956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.587 [2024-04-17 08:23:16.663163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.587 08:23:16 -- accel/accel.sh@21 -- # val= 00:26:43.587 08:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # IFS=: 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # read -r var val 00:26:43.587 08:23:16 -- accel/accel.sh@21 -- # val= 00:26:43.587 08:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # IFS=: 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # read -r var val 00:26:43.587 08:23:16 -- accel/accel.sh@21 -- # val=0x1 00:26:43.587 08:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # IFS=: 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # read -r var val 00:26:43.587 08:23:16 -- accel/accel.sh@21 -- # val= 00:26:43.587 08:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # IFS=: 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # read -r var val 00:26:43.587 08:23:16 -- accel/accel.sh@21 -- # val= 00:26:43.587 08:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # IFS=: 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # read -r var val 00:26:43.587 08:23:16 -- accel/accel.sh@21 -- # val=copy 00:26:43.587 08:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:26:43.587 08:23:16 -- accel/accel.sh@24 -- # accel_opc=copy 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # IFS=: 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # read -r var val 00:26:43.587 08:23:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:26:43.587 08:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # IFS=: 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # read -r var val 00:26:43.587 08:23:16 -- 
accel/accel.sh@21 -- # val= 00:26:43.587 08:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # IFS=: 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # read -r var val 00:26:43.587 08:23:16 -- accel/accel.sh@21 -- # val=software 00:26:43.587 08:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:26:43.587 08:23:16 -- accel/accel.sh@23 -- # accel_module=software 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # IFS=: 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # read -r var val 00:26:43.587 08:23:16 -- accel/accel.sh@21 -- # val=32 00:26:43.587 08:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # IFS=: 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # read -r var val 00:26:43.587 08:23:16 -- accel/accel.sh@21 -- # val=32 00:26:43.587 08:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # IFS=: 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # read -r var val 00:26:43.587 08:23:16 -- accel/accel.sh@21 -- # val=1 00:26:43.587 08:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # IFS=: 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # read -r var val 00:26:43.587 08:23:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:26:43.587 08:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # IFS=: 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # read -r var val 00:26:43.587 08:23:16 -- accel/accel.sh@21 -- # val=Yes 00:26:43.587 08:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # IFS=: 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # read -r var val 00:26:43.587 08:23:16 -- accel/accel.sh@21 -- # val= 00:26:43.587 08:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # IFS=: 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # read -r var val 00:26:43.587 08:23:16 -- accel/accel.sh@21 -- # val= 00:26:43.587 08:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # IFS=: 00:26:43.587 08:23:16 -- accel/accel.sh@20 -- # read -r var val 00:26:44.963 08:23:17 -- accel/accel.sh@21 -- # val= 00:26:44.963 08:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:26:44.963 08:23:17 -- accel/accel.sh@20 -- # IFS=: 00:26:44.963 08:23:17 -- accel/accel.sh@20 -- # read -r var val 00:26:44.963 08:23:17 -- accel/accel.sh@21 -- # val= 00:26:44.963 08:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:26:44.963 08:23:17 -- accel/accel.sh@20 -- # IFS=: 00:26:44.963 08:23:17 -- accel/accel.sh@20 -- # read -r var val 00:26:44.963 08:23:17 -- accel/accel.sh@21 -- # val= 00:26:44.963 08:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:26:44.963 08:23:17 -- accel/accel.sh@20 -- # IFS=: 00:26:44.963 08:23:17 -- accel/accel.sh@20 -- # read -r var val 00:26:44.963 08:23:17 -- accel/accel.sh@21 -- # val= 00:26:44.963 08:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:26:44.963 08:23:17 -- accel/accel.sh@20 -- # IFS=: 00:26:44.963 08:23:17 -- accel/accel.sh@20 -- # read -r var val 00:26:44.963 08:23:17 -- accel/accel.sh@21 -- # val= 00:26:44.963 08:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:26:44.963 08:23:17 -- accel/accel.sh@20 -- # IFS=: 00:26:44.963 08:23:17 -- accel/accel.sh@20 -- # read -r var val 00:26:44.963 08:23:17 -- accel/accel.sh@21 -- # val= 00:26:44.963 08:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:26:44.963 08:23:17 -- accel/accel.sh@20 -- # IFS=: 00:26:44.963 08:23:17 -- 
accel/accel.sh@20 -- # read -r var val 00:26:44.963 08:23:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:26:44.963 08:23:17 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:26:44.963 08:23:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:44.963 00:26:44.963 real 0m2.971s 00:26:44.963 user 0m2.577s 00:26:44.963 sys 0m0.198s 00:26:44.963 08:23:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:44.963 08:23:17 -- common/autotest_common.sh@10 -- # set +x 00:26:44.963 ************************************ 00:26:44.963 END TEST accel_copy 00:26:44.963 ************************************ 00:26:44.963 08:23:17 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:26:44.963 08:23:17 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:26:44.963 08:23:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:44.963 08:23:17 -- common/autotest_common.sh@10 -- # set +x 00:26:44.963 ************************************ 00:26:44.963 START TEST accel_fill 00:26:44.963 ************************************ 00:26:44.963 08:23:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:26:44.963 08:23:17 -- accel/accel.sh@16 -- # local accel_opc 00:26:44.963 08:23:17 -- accel/accel.sh@17 -- # local accel_module 00:26:44.963 08:23:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:26:44.963 08:23:17 -- accel/accel.sh@12 -- # build_accel_config 00:26:44.963 08:23:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:26:44.963 08:23:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:26:44.963 08:23:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:44.963 08:23:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:44.963 08:23:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:26:44.963 08:23:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:26:44.963 08:23:17 -- accel/accel.sh@41 -- # local IFS=, 00:26:44.963 08:23:17 -- accel/accel.sh@42 -- # jq -r . 00:26:44.963 [2024-04-17 08:23:17.963490] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:44.963 [2024-04-17 08:23:17.963585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58740 ] 00:26:44.963 [2024-04-17 08:23:18.102193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.963 [2024-04-17 08:23:18.208129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.337 08:23:19 -- accel/accel.sh@18 -- # out=' 00:26:46.337 SPDK Configuration: 00:26:46.337 Core mask: 0x1 00:26:46.337 00:26:46.337 Accel Perf Configuration: 00:26:46.337 Workload Type: fill 00:26:46.337 Fill pattern: 0x80 00:26:46.337 Transfer size: 4096 bytes 00:26:46.337 Vector count 1 00:26:46.337 Module: software 00:26:46.337 Queue depth: 64 00:26:46.337 Allocate depth: 64 00:26:46.337 # threads/core: 1 00:26:46.337 Run time: 1 seconds 00:26:46.337 Verify: Yes 00:26:46.337 00:26:46.337 Running for 1 seconds... 
00:26:46.337 00:26:46.337 Core,Thread Transfers Bandwidth Failed Miscompares 00:26:46.337 ------------------------------------------------------------------------------------ 00:26:46.337 0,0 533184/s 2082 MiB/s 0 0 00:26:46.337 ==================================================================================== 00:26:46.337 Total 533184/s 2082 MiB/s 0 0' 00:26:46.337 08:23:19 -- accel/accel.sh@20 -- # IFS=: 00:26:46.337 08:23:19 -- accel/accel.sh@20 -- # read -r var val 00:26:46.337 08:23:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:26:46.337 08:23:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:26:46.337 08:23:19 -- accel/accel.sh@12 -- # build_accel_config 00:26:46.337 08:23:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:26:46.337 08:23:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:46.337 08:23:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:46.337 08:23:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:26:46.337 08:23:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:26:46.337 08:23:19 -- accel/accel.sh@41 -- # local IFS=, 00:26:46.337 08:23:19 -- accel/accel.sh@42 -- # jq -r . 00:26:46.337 [2024-04-17 08:23:19.440384] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:46.337 [2024-04-17 08:23:19.440480] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58764 ] 00:26:46.337 [2024-04-17 08:23:19.578277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.600 [2024-04-17 08:23:19.684507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.601 08:23:19 -- accel/accel.sh@21 -- # val= 00:26:46.601 08:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # IFS=: 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # read -r var val 00:26:46.601 08:23:19 -- accel/accel.sh@21 -- # val= 00:26:46.601 08:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # IFS=: 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # read -r var val 00:26:46.601 08:23:19 -- accel/accel.sh@21 -- # val=0x1 00:26:46.601 08:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # IFS=: 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # read -r var val 00:26:46.601 08:23:19 -- accel/accel.sh@21 -- # val= 00:26:46.601 08:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # IFS=: 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # read -r var val 00:26:46.601 08:23:19 -- accel/accel.sh@21 -- # val= 00:26:46.601 08:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # IFS=: 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # read -r var val 00:26:46.601 08:23:19 -- accel/accel.sh@21 -- # val=fill 00:26:46.601 08:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:26:46.601 08:23:19 -- accel/accel.sh@24 -- # accel_opc=fill 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # IFS=: 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # read -r var val 00:26:46.601 08:23:19 -- accel/accel.sh@21 -- # val=0x80 00:26:46.601 08:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # IFS=: 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # read -r var val 
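    # [editor's note] The repeating val=/case "$var"/IFS=:/read -r var val lines in this
    # trace are accel.sh driving each workload through a second accel_perf run (note the
    # new pid per test, e.g. 58740 then 58764 for fill): the saved settings are replayed
    # one var/val pair at a time through a loop of roughly this shape -- a sketch of the
    # control flow only, not the verbatim script:
    while IFS=: read -r var val; do
        case "$var" in
            *) : ;;   # each 'val=...' line in the trace is one iteration's assignment
        esac
    done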
00:26:46.601 08:23:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:26:46.601 08:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # IFS=: 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # read -r var val 00:26:46.601 08:23:19 -- accel/accel.sh@21 -- # val= 00:26:46.601 08:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # IFS=: 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # read -r var val 00:26:46.601 08:23:19 -- accel/accel.sh@21 -- # val=software 00:26:46.601 08:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:26:46.601 08:23:19 -- accel/accel.sh@23 -- # accel_module=software 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # IFS=: 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # read -r var val 00:26:46.601 08:23:19 -- accel/accel.sh@21 -- # val=64 00:26:46.601 08:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # IFS=: 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # read -r var val 00:26:46.601 08:23:19 -- accel/accel.sh@21 -- # val=64 00:26:46.601 08:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # IFS=: 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # read -r var val 00:26:46.601 08:23:19 -- accel/accel.sh@21 -- # val=1 00:26:46.601 08:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # IFS=: 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # read -r var val 00:26:46.601 08:23:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:26:46.601 08:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # IFS=: 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # read -r var val 00:26:46.601 08:23:19 -- accel/accel.sh@21 -- # val=Yes 00:26:46.601 08:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # IFS=: 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # read -r var val 00:26:46.601 08:23:19 -- accel/accel.sh@21 -- # val= 00:26:46.601 08:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # IFS=: 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # read -r var val 00:26:46.601 08:23:19 -- accel/accel.sh@21 -- # val= 00:26:46.601 08:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # IFS=: 00:26:46.601 08:23:19 -- accel/accel.sh@20 -- # read -r var val 00:26:47.994 08:23:20 -- accel/accel.sh@21 -- # val= 00:26:47.994 08:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:26:47.994 08:23:20 -- accel/accel.sh@20 -- # IFS=: 00:26:47.994 08:23:20 -- accel/accel.sh@20 -- # read -r var val 00:26:47.994 08:23:20 -- accel/accel.sh@21 -- # val= 00:26:47.994 08:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:26:47.994 08:23:20 -- accel/accel.sh@20 -- # IFS=: 00:26:47.994 08:23:20 -- accel/accel.sh@20 -- # read -r var val 00:26:47.994 08:23:20 -- accel/accel.sh@21 -- # val= 00:26:47.994 08:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:26:47.994 08:23:20 -- accel/accel.sh@20 -- # IFS=: 00:26:47.994 08:23:20 -- accel/accel.sh@20 -- # read -r var val 00:26:47.994 08:23:20 -- accel/accel.sh@21 -- # val= 00:26:47.994 08:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:26:47.994 08:23:20 -- accel/accel.sh@20 -- # IFS=: 00:26:47.994 08:23:20 -- accel/accel.sh@20 -- # read -r var val 00:26:47.994 08:23:20 -- accel/accel.sh@21 -- # val= 00:26:47.994 08:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:26:47.994 08:23:20 -- accel/accel.sh@20 -- # IFS=: 
00:26:47.994 08:23:20 -- accel/accel.sh@20 -- # read -r var val 00:26:47.994 08:23:20 -- accel/accel.sh@21 -- # val= 00:26:47.994 08:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:26:47.994 08:23:20 -- accel/accel.sh@20 -- # IFS=: 00:26:47.994 08:23:20 -- accel/accel.sh@20 -- # read -r var val 00:26:47.994 08:23:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:26:47.994 08:23:20 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:26:47.994 08:23:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:47.994 00:26:47.994 real 0m2.969s 00:26:47.994 user 0m2.586s 00:26:47.994 sys 0m0.185s 00:26:47.994 08:23:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:47.994 08:23:20 -- common/autotest_common.sh@10 -- # set +x 00:26:47.994 ************************************ 00:26:47.994 END TEST accel_fill 00:26:47.995 ************************************ 00:26:47.995 08:23:20 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:26:47.995 08:23:20 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:26:47.995 08:23:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:47.995 08:23:20 -- common/autotest_common.sh@10 -- # set +x 00:26:47.995 ************************************ 00:26:47.995 START TEST accel_copy_crc32c 00:26:47.995 ************************************ 00:26:47.995 08:23:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:26:47.995 08:23:20 -- accel/accel.sh@16 -- # local accel_opc 00:26:47.995 08:23:20 -- accel/accel.sh@17 -- # local accel_module 00:26:47.995 08:23:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:26:47.995 08:23:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:26:47.995 08:23:20 -- accel/accel.sh@12 -- # build_accel_config 00:26:47.995 08:23:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:26:47.995 08:23:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:47.995 08:23:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:47.995 08:23:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:26:47.995 08:23:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:26:47.995 08:23:20 -- accel/accel.sh@41 -- # local IFS=, 00:26:47.995 08:23:20 -- accel/accel.sh@42 -- # jq -r . 00:26:47.995 [2024-04-17 08:23:20.999051] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:47.995 [2024-04-17 08:23:20.999249] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58794 ] 00:26:47.995 [2024-04-17 08:23:21.139136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.995 [2024-04-17 08:23:21.244811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.418 08:23:22 -- accel/accel.sh@18 -- # out=' 00:26:49.418 SPDK Configuration: 00:26:49.418 Core mask: 0x1 00:26:49.418 00:26:49.418 Accel Perf Configuration: 00:26:49.418 Workload Type: copy_crc32c 00:26:49.418 CRC-32C seed: 0 00:26:49.418 Vector size: 4096 bytes 00:26:49.418 Transfer size: 4096 bytes 00:26:49.418 Vector count 1 00:26:49.418 Module: software 00:26:49.418 Queue depth: 32 00:26:49.418 Allocate depth: 32 00:26:49.418 # threads/core: 1 00:26:49.418 Run time: 1 seconds 00:26:49.418 Verify: Yes 00:26:49.418 00:26:49.418 Running for 1 seconds... 
00:26:49.418 00:26:49.418 Core,Thread Transfers Bandwidth Failed Miscompares 00:26:49.418 ------------------------------------------------------------------------------------ 00:26:49.418 0,0 277280/s 1083 MiB/s 0 0 00:26:49.418 ==================================================================================== 00:26:49.418 Total 277280/s 1083 MiB/s 0 0' 00:26:49.418 08:23:22 -- accel/accel.sh@20 -- # IFS=: 00:26:49.418 08:23:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:26:49.418 08:23:22 -- accel/accel.sh@20 -- # read -r var val 00:26:49.418 08:23:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:26:49.418 08:23:22 -- accel/accel.sh@12 -- # build_accel_config 00:26:49.418 08:23:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:26:49.418 08:23:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:49.418 08:23:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:49.418 08:23:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:26:49.418 08:23:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:26:49.419 08:23:22 -- accel/accel.sh@41 -- # local IFS=, 00:26:49.419 08:23:22 -- accel/accel.sh@42 -- # jq -r . 00:26:49.419 [2024-04-17 08:23:22.488230] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:49.419 [2024-04-17 08:23:22.488432] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58814 ] 00:26:49.419 [2024-04-17 08:23:22.630055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.419 [2024-04-17 08:23:22.733517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.677 08:23:22 -- accel/accel.sh@21 -- # val= 00:26:49.677 08:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # IFS=: 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # read -r var val 00:26:49.677 08:23:22 -- accel/accel.sh@21 -- # val= 00:26:49.677 08:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # IFS=: 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # read -r var val 00:26:49.677 08:23:22 -- accel/accel.sh@21 -- # val=0x1 00:26:49.677 08:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # IFS=: 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # read -r var val 00:26:49.677 08:23:22 -- accel/accel.sh@21 -- # val= 00:26:49.677 08:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # IFS=: 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # read -r var val 00:26:49.677 08:23:22 -- accel/accel.sh@21 -- # val= 00:26:49.677 08:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # IFS=: 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # read -r var val 00:26:49.677 08:23:22 -- accel/accel.sh@21 -- # val=copy_crc32c 00:26:49.677 08:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:26:49.677 08:23:22 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # IFS=: 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # read -r var val 00:26:49.677 08:23:22 -- accel/accel.sh@21 -- # val=0 00:26:49.677 08:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # IFS=: 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # read -r var val 00:26:49.677 
08:23:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:26:49.677 08:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # IFS=: 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # read -r var val 00:26:49.677 08:23:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:26:49.677 08:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # IFS=: 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # read -r var val 00:26:49.677 08:23:22 -- accel/accel.sh@21 -- # val= 00:26:49.677 08:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # IFS=: 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # read -r var val 00:26:49.677 08:23:22 -- accel/accel.sh@21 -- # val=software 00:26:49.677 08:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:26:49.677 08:23:22 -- accel/accel.sh@23 -- # accel_module=software 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # IFS=: 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # read -r var val 00:26:49.677 08:23:22 -- accel/accel.sh@21 -- # val=32 00:26:49.677 08:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # IFS=: 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # read -r var val 00:26:49.677 08:23:22 -- accel/accel.sh@21 -- # val=32 00:26:49.677 08:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # IFS=: 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # read -r var val 00:26:49.677 08:23:22 -- accel/accel.sh@21 -- # val=1 00:26:49.677 08:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # IFS=: 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # read -r var val 00:26:49.677 08:23:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:26:49.677 08:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # IFS=: 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # read -r var val 00:26:49.677 08:23:22 -- accel/accel.sh@21 -- # val=Yes 00:26:49.677 08:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # IFS=: 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # read -r var val 00:26:49.677 08:23:22 -- accel/accel.sh@21 -- # val= 00:26:49.677 08:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # IFS=: 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # read -r var val 00:26:49.677 08:23:22 -- accel/accel.sh@21 -- # val= 00:26:49.677 08:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # IFS=: 00:26:49.677 08:23:22 -- accel/accel.sh@20 -- # read -r var val 00:26:50.662 08:23:23 -- accel/accel.sh@21 -- # val= 00:26:50.662 08:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:26:50.662 08:23:23 -- accel/accel.sh@20 -- # IFS=: 00:26:50.662 08:23:23 -- accel/accel.sh@20 -- # read -r var val 00:26:50.662 08:23:23 -- accel/accel.sh@21 -- # val= 00:26:50.662 08:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:26:50.662 08:23:23 -- accel/accel.sh@20 -- # IFS=: 00:26:50.662 08:23:23 -- accel/accel.sh@20 -- # read -r var val 00:26:50.662 08:23:23 -- accel/accel.sh@21 -- # val= 00:26:50.663 08:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:26:50.663 08:23:23 -- accel/accel.sh@20 -- # IFS=: 00:26:50.663 08:23:23 -- accel/accel.sh@20 -- # read -r var val 00:26:50.663 08:23:23 -- accel/accel.sh@21 -- # val= 00:26:50.663 08:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:26:50.663 08:23:23 -- accel/accel.sh@20 -- # IFS=: 
00:26:50.663 08:23:23 -- accel/accel.sh@20 -- # read -r var val 00:26:50.663 08:23:23 -- accel/accel.sh@21 -- # val= 00:26:50.663 08:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:26:50.663 08:23:23 -- accel/accel.sh@20 -- # IFS=: 00:26:50.663 08:23:23 -- accel/accel.sh@20 -- # read -r var val 00:26:50.663 08:23:23 -- accel/accel.sh@21 -- # val= 00:26:50.663 08:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:26:50.663 08:23:23 -- accel/accel.sh@20 -- # IFS=: 00:26:50.663 08:23:23 -- accel/accel.sh@20 -- # read -r var val 00:26:50.663 08:23:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:26:50.663 08:23:23 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:26:50.663 08:23:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:50.663 00:26:50.663 real 0m2.988s 00:26:50.663 user 0m2.584s 00:26:50.663 sys 0m0.208s 00:26:50.663 08:23:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:50.663 08:23:23 -- common/autotest_common.sh@10 -- # set +x 00:26:50.663 ************************************ 00:26:50.663 END TEST accel_copy_crc32c 00:26:50.663 ************************************ 00:26:50.922 08:23:24 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:26:50.922 08:23:24 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:26:50.922 08:23:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:50.922 08:23:24 -- common/autotest_common.sh@10 -- # set +x 00:26:50.922 ************************************ 00:26:50.922 START TEST accel_copy_crc32c_C2 00:26:50.922 ************************************ 00:26:50.922 08:23:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:26:50.922 08:23:24 -- accel/accel.sh@16 -- # local accel_opc 00:26:50.922 08:23:24 -- accel/accel.sh@17 -- # local accel_module 00:26:50.922 08:23:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:26:50.922 08:23:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:26:50.922 08:23:24 -- accel/accel.sh@12 -- # build_accel_config 00:26:50.922 08:23:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:26:50.922 08:23:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:50.922 08:23:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:50.922 08:23:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:26:50.922 08:23:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:26:50.922 08:23:24 -- accel/accel.sh@41 -- # local IFS=, 00:26:50.922 08:23:24 -- accel/accel.sh@42 -- # jq -r . 00:26:50.922 [2024-04-17 08:23:24.050271] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:50.922 [2024-04-17 08:23:24.050447] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58848 ] 00:26:50.922 [2024-04-17 08:23:24.190836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.180 [2024-04-17 08:23:24.293528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.560 08:23:25 -- accel/accel.sh@18 -- # out=' 00:26:52.560 SPDK Configuration: 00:26:52.560 Core mask: 0x1 00:26:52.560 00:26:52.560 Accel Perf Configuration: 00:26:52.560 Workload Type: copy_crc32c 00:26:52.560 CRC-32C seed: 0 00:26:52.560 Vector size: 4096 bytes 00:26:52.560 Transfer size: 8192 bytes 00:26:52.560 Vector count 2 00:26:52.560 Module: software 00:26:52.560 Queue depth: 32 00:26:52.560 Allocate depth: 32 00:26:52.560 # threads/core: 1 00:26:52.560 Run time: 1 seconds 00:26:52.560 Verify: Yes 00:26:52.560 00:26:52.560 Running for 1 seconds... 00:26:52.560 00:26:52.560 Core,Thread Transfers Bandwidth Failed Miscompares 00:26:52.560 ------------------------------------------------------------------------------------ 00:26:52.560 0,0 193824/s 1514 MiB/s 0 0 00:26:52.560 ==================================================================================== 00:26:52.560 Total 193824/s 757 MiB/s 0 0' 00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # IFS=: 00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # read -r var val 00:26:52.560 08:23:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:26:52.560 08:23:25 -- accel/accel.sh@12 -- # build_accel_config 00:26:52.560 08:23:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:26:52.560 08:23:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:26:52.560 08:23:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:26:52.560 08:23:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:26:52.560 08:23:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:26:52.560 08:23:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:26:52.560 08:23:25 -- accel/accel.sh@41 -- # local IFS=, 00:26:52.560 08:23:25 -- accel/accel.sh@42 -- # jq -r . 00:26:52.560 [2024-04-17 08:23:25.540741] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
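[editor's note] A consistency check on the two vector-count-2 tables, rather than a correction: in both the crc32c -C 2 table earlier and the copy_crc32c -C 2 table just above, the single 0,0 line and the Total line differ by exactly 2x. The per-core figure counts both 4096-byte vectors (8192 bytes per operation), while the Total line is evidently computed per single 4096-byte vector, so neither number is a transcription error:

    awk 'BEGIN { printf "%.0f MiB/s\n", 193824 * 8192 / 1048576 }'   # 1514 -- the 0,0 line (8192 bytes per op)
    awk 'BEGIN { printf "%.0f MiB/s\n", 193824 * 4096 / 1048576 }'   # 757  -- the Total line (per 4096-byte vector)

The same arithmetic holds for the crc32c case: 355168 x 8192 / 2^20 gives 2774 MiB/s and 355168 x 4096 / 2^20 gives 1387 MiB/s, matching the two figures in that table.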
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # IFS=:
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # read -r var val
00:26:52.560 08:23:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:26:52.560 08:23:25 -- accel/accel.sh@12 -- # build_accel_config
00:26:52.560 08:23:25 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:26:52.560 08:23:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:26:52.560 08:23:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:26:52.560 08:23:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:26:52.560 08:23:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:26:52.560 08:23:25 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:26:52.560 08:23:25 -- accel/accel.sh@41 -- # local IFS=,
00:26:52.560 08:23:25 -- accel/accel.sh@42 -- # jq -r .
00:26:52.560 [2024-04-17 08:23:25.540741] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:26:52.560 [2024-04-17 08:23:25.540833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58868 ]
00:26:52.560 [2024-04-17 08:23:25.681693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:52.560 [2024-04-17 08:23:25.782529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:52.560 08:23:25 -- accel/accel.sh@21 -- # val=
00:26:52.560 08:23:25 -- accel/accel.sh@22 -- # case "$var" in
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # IFS=:
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # read -r var val
00:26:52.560 08:23:25 -- accel/accel.sh@21 -- # val=
00:26:52.560 08:23:25 -- accel/accel.sh@22 -- # case "$var" in
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # IFS=:
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # read -r var val
00:26:52.560 08:23:25 -- accel/accel.sh@21 -- # val=0x1
00:26:52.560 08:23:25 -- accel/accel.sh@22 -- # case "$var" in
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # IFS=:
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # read -r var val
00:26:52.560 08:23:25 -- accel/accel.sh@21 -- # val=
00:26:52.560 08:23:25 -- accel/accel.sh@22 -- # case "$var" in
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # IFS=:
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # read -r var val
00:26:52.560 08:23:25 -- accel/accel.sh@21 -- # val=
00:26:52.560 08:23:25 -- accel/accel.sh@22 -- # case "$var" in
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # IFS=:
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # read -r var val
00:26:52.560 08:23:25 -- accel/accel.sh@21 -- # val=copy_crc32c
00:26:52.560 08:23:25 -- accel/accel.sh@22 -- # case "$var" in
00:26:52.560 08:23:25 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # IFS=:
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # read -r var val
00:26:52.560 08:23:25 -- accel/accel.sh@21 -- # val=0
00:26:52.560 08:23:25 -- accel/accel.sh@22 -- # case "$var" in
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # IFS=:
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # read -r var val
00:26:52.560 08:23:25 -- accel/accel.sh@21 -- # val='4096 bytes'
00:26:52.560 08:23:25 -- accel/accel.sh@22 -- # case "$var" in
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # IFS=:
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # read -r var val
00:26:52.560 08:23:25 -- accel/accel.sh@21 -- # val='8192 bytes'
00:26:52.560 08:23:25 -- accel/accel.sh@22 -- # case "$var" in
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # IFS=:
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # read -r var val
00:26:52.560 08:23:25 -- accel/accel.sh@21 -- # val=
00:26:52.560 08:23:25 -- accel/accel.sh@22 -- # case "$var" in
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # IFS=:
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # read -r var val
00:26:52.560 08:23:25 -- accel/accel.sh@21 -- # val=software
00:26:52.560 08:23:25 -- accel/accel.sh@22 -- # case "$var" in
00:26:52.560 08:23:25 -- accel/accel.sh@23 -- # accel_module=software
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # IFS=:
00:26:52.560 08:23:25 -- accel/accel.sh@20 -- # read -r var val
00:26:52.560 08:23:25 -- accel/accel.sh@21 -- # val=32
00:26:52.561 08:23:25 -- accel/accel.sh@22 -- # case "$var" in
00:26:52.561 08:23:25 -- accel/accel.sh@20 -- # IFS=:
00:26:52.561 08:23:25 -- accel/accel.sh@20 -- # read -r var val
00:26:52.561 08:23:25 -- accel/accel.sh@21 -- # val=32
00:26:52.561 08:23:25 -- accel/accel.sh@22 -- # case "$var" in
00:26:52.561 08:23:25 -- accel/accel.sh@20 -- # IFS=:
00:26:52.561 08:23:25 -- accel/accel.sh@20 -- # read -r var val
00:26:52.561 08:23:25 -- accel/accel.sh@21 -- # val=1
00:26:52.561 08:23:25 -- accel/accel.sh@22 -- # case "$var" in
00:26:52.561 08:23:25 -- accel/accel.sh@20 -- # IFS=:
00:26:52.561 08:23:25 -- accel/accel.sh@20 -- # read -r var val
00:26:52.561 08:23:25 -- accel/accel.sh@21 -- # val='1 seconds'
00:26:52.561 08:23:25 -- accel/accel.sh@22 -- # case "$var" in
00:26:52.561 08:23:25 -- accel/accel.sh@20 -- # IFS=:
00:26:52.561 08:23:25 -- accel/accel.sh@20 -- # read -r var val
00:26:52.561 08:23:25 -- accel/accel.sh@21 -- # val=Yes
00:26:52.561 08:23:25 -- accel/accel.sh@22 -- # case "$var" in
00:26:52.561 08:23:25 -- accel/accel.sh@20 -- # IFS=:
00:26:52.561 08:23:25 -- accel/accel.sh@20 -- # read -r var val
00:26:52.561 08:23:25 -- accel/accel.sh@21 -- # val=
00:26:52.561 08:23:25 -- accel/accel.sh@22 -- # case "$var" in
00:26:52.561 08:23:25 -- accel/accel.sh@20 -- # IFS=:
00:26:52.561 08:23:25 -- accel/accel.sh@20 -- # read -r var val
00:26:52.561 08:23:25 -- accel/accel.sh@21 -- # val=
00:26:52.561 08:23:25 -- accel/accel.sh@22 -- # case "$var" in
00:26:52.561 08:23:25 -- accel/accel.sh@20 -- # IFS=:
00:26:52.561 08:23:25 -- accel/accel.sh@20 -- # read -r var val
00:26:53.936 08:23:26 -- accel/accel.sh@21 -- # val=
00:26:53.936 08:23:26 -- accel/accel.sh@22 -- # case "$var" in
00:26:53.936 08:23:26 -- accel/accel.sh@20 -- # IFS=:
00:26:53.936 08:23:26 -- accel/accel.sh@20 -- # read -r var val
00:26:53.936 08:23:26 -- accel/accel.sh@21 -- # val=
00:26:53.936 08:23:26 -- accel/accel.sh@22 -- # case "$var" in
00:26:53.936 08:23:26 -- accel/accel.sh@20 -- # IFS=:
00:26:53.936 08:23:26 -- accel/accel.sh@20 -- # read -r var val
00:26:53.936 08:23:26 -- accel/accel.sh@21 -- # val=
00:26:53.936 08:23:26 -- accel/accel.sh@22 -- # case "$var" in
00:26:53.936 08:23:26 -- accel/accel.sh@20 -- # IFS=:
00:26:53.936 08:23:26 -- accel/accel.sh@20 -- # read -r var val
00:26:53.936 08:23:26 -- accel/accel.sh@21 -- # val=
00:26:53.936 08:23:26 -- accel/accel.sh@22 -- # case "$var" in
00:26:53.936 08:23:26 -- accel/accel.sh@20 -- # IFS=:
00:26:53.936 08:23:26 -- accel/accel.sh@20 -- # read -r var val
00:26:53.936 08:23:26 -- accel/accel.sh@21 -- # val=
00:26:53.936 08:23:26 -- accel/accel.sh@22 -- # case "$var" in
00:26:53.936 08:23:26 -- accel/accel.sh@20 -- # IFS=:
00:26:53.936 08:23:26 -- accel/accel.sh@20 -- # read -r var val
00:26:53.936 08:23:26 -- accel/accel.sh@21 -- # val=
00:26:53.936 08:23:26 -- accel/accel.sh@22 -- # case "$var" in
00:26:53.936 08:23:26 -- accel/accel.sh@20 -- # IFS=:
00:26:53.936 08:23:26 -- accel/accel.sh@20 -- # read -r var val
00:26:53.936 08:23:26 -- accel/accel.sh@28 -- # [[ -n software ]]
00:26:53.936 08:23:26 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]]
00:26:53.936 08:23:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:53.936 ************************************
00:26:53.936 END TEST accel_copy_crc32c_C2
00:26:53.936 ************************************
00:26:53.936 
00:26:53.936 real 0m2.981s
00:26:53.936 user 0m2.590s
00:26:53.936 sys 0m0.192s
00:26:53.936 08:23:26 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:53.936 08:23:26 -- common/autotest_common.sh@10 -- # set +x
00:26:53.936 08:23:27 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:26:53.936 08:23:27 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']'
00:26:53.936 08:23:27 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:26:53.936 08:23:27 -- common/autotest_common.sh@10 -- # set +x
00:26:53.936 ************************************
00:26:53.936 START TEST accel_dualcast
00:26:53.936 ************************************
00:26:53.936 08:23:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y
00:26:53.936 08:23:27 -- accel/accel.sh@16 -- # local accel_opc
00:26:53.936 08:23:27 -- accel/accel.sh@17 -- # local accel_module
00:26:53.936 08:23:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y
00:26:53.936 08:23:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:26:53.936 08:23:27 -- accel/accel.sh@12 -- # build_accel_config
00:26:53.936 08:23:27 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:26:53.936 08:23:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:26:53.936 08:23:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:26:53.936 08:23:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:26:53.936 08:23:27 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:26:53.936 08:23:27 -- accel/accel.sh@41 -- # local IFS=,
00:26:53.936 08:23:27 -- accel/accel.sh@42 -- # jq -r .
00:26:53.936 [2024-04-17 08:23:27.094442] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:26:53.936 [2024-04-17 08:23:27.094638] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58902 ]
00:26:53.936 [2024-04-17 08:23:27.234160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:54.195 [2024-04-17 08:23:27.339748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:55.575 08:23:28 -- accel/accel.sh@18 -- # out='
00:26:55.575 SPDK Configuration:
00:26:55.575 Core mask: 0x1
00:26:55.575 
00:26:55.575 Accel Perf Configuration:
00:26:55.575 Workload Type: dualcast
00:26:55.575 Transfer size: 4096 bytes
00:26:55.575 Vector count 1
00:26:55.575 Module: software
00:26:55.575 Queue depth: 32
00:26:55.575 Allocate depth: 32
00:26:55.575 # threads/core: 1
00:26:55.575 Run time: 1 seconds
00:26:55.575 Verify: Yes
00:26:55.575 
00:26:55.575 Running for 1 seconds...
00:26:55.575 
00:26:55.575 Core,Thread Transfers Bandwidth Failed Miscompares
00:26:55.575 ------------------------------------------------------------------------------------
00:26:55.575 0,0 397760/s 1553 MiB/s 0 0
00:26:55.575 ====================================================================================
00:26:55.575 Total 397760/s 1553 MiB/s 0 0'
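(Reader-side aside.) The same cross-check works for the dualcast table, reusing the mibs helper defined earlier; with a single core the per-core and Total rows agree:

  mibs 397760 4096   # -> 1553, matching both dualcast rows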
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # IFS=:
00:26:55.575 08:23:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # read -r var val
00:26:55.575 08:23:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:26:55.575 08:23:28 -- accel/accel.sh@12 -- # build_accel_config
00:26:55.575 08:23:28 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:26:55.575 08:23:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:26:55.575 08:23:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:26:55.575 08:23:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:26:55.575 08:23:28 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:26:55.575 08:23:28 -- accel/accel.sh@41 -- # local IFS=,
00:26:55.575 08:23:28 -- accel/accel.sh@42 -- # jq -r .
00:26:55.575 [2024-04-17 08:23:28.588936] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:26:55.575 [2024-04-17 08:23:28.589044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58922 ]
00:26:55.575 [2024-04-17 08:23:28.725779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:55.575 [2024-04-17 08:23:28.833338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:55.575 08:23:28 -- accel/accel.sh@21 -- # val=
00:26:55.575 08:23:28 -- accel/accel.sh@22 -- # case "$var" in
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # IFS=:
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # read -r var val
00:26:55.575 08:23:28 -- accel/accel.sh@21 -- # val=
00:26:55.575 08:23:28 -- accel/accel.sh@22 -- # case "$var" in
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # IFS=:
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # read -r var val
00:26:55.575 08:23:28 -- accel/accel.sh@21 -- # val=0x1
00:26:55.575 08:23:28 -- accel/accel.sh@22 -- # case "$var" in
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # IFS=:
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # read -r var val
00:26:55.575 08:23:28 -- accel/accel.sh@21 -- # val=
00:26:55.575 08:23:28 -- accel/accel.sh@22 -- # case "$var" in
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # IFS=:
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # read -r var val
00:26:55.575 08:23:28 -- accel/accel.sh@21 -- # val=
00:26:55.575 08:23:28 -- accel/accel.sh@22 -- # case "$var" in
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # IFS=:
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # read -r var val
00:26:55.575 08:23:28 -- accel/accel.sh@21 -- # val=dualcast
00:26:55.575 08:23:28 -- accel/accel.sh@22 -- # case "$var" in
00:26:55.575 08:23:28 -- accel/accel.sh@24 -- # accel_opc=dualcast
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # IFS=:
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # read -r var val
00:26:55.575 08:23:28 -- accel/accel.sh@21 -- # val='4096 bytes'
00:26:55.575 08:23:28 -- accel/accel.sh@22 -- # case "$var" in
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # IFS=:
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # read -r var val
00:26:55.575 08:23:28 -- accel/accel.sh@21 -- # val=
00:26:55.575 08:23:28 -- accel/accel.sh@22 -- # case "$var" in
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # IFS=:
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # read -r var val
00:26:55.575 08:23:28 -- accel/accel.sh@21 -- # val=software
00:26:55.575 08:23:28 -- accel/accel.sh@22 -- # case "$var" in
00:26:55.575 08:23:28 -- accel/accel.sh@23 -- # accel_module=software
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # IFS=:
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # read -r var val
00:26:55.575 08:23:28 -- accel/accel.sh@21 -- # val=32
00:26:55.575 08:23:28 -- accel/accel.sh@22 -- # case "$var" in
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # IFS=:
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # read -r var val
00:26:55.575 08:23:28 -- accel/accel.sh@21 -- # val=32
00:26:55.575 08:23:28 -- accel/accel.sh@22 -- # case "$var" in
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # IFS=:
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # read -r var val
00:26:55.575 08:23:28 -- accel/accel.sh@21 -- # val=1
00:26:55.575 08:23:28 -- accel/accel.sh@22 -- # case "$var" in
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # IFS=:
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # read -r var val
00:26:55.575 08:23:28 -- accel/accel.sh@21 -- # val='1 seconds'
00:26:55.575 08:23:28 -- accel/accel.sh@22 -- # case "$var" in
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # IFS=:
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # read -r var val
00:26:55.575 08:23:28 -- accel/accel.sh@21 -- # val=Yes
00:26:55.575 08:23:28 -- accel/accel.sh@22 -- # case "$var" in
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # IFS=:
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # read -r var val
00:26:55.575 08:23:28 -- accel/accel.sh@21 -- # val=
00:26:55.575 08:23:28 -- accel/accel.sh@22 -- # case "$var" in
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # IFS=:
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # read -r var val
00:26:55.575 08:23:28 -- accel/accel.sh@21 -- # val=
00:26:55.575 08:23:28 -- accel/accel.sh@22 -- # case "$var" in
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # IFS=:
00:26:55.575 08:23:28 -- accel/accel.sh@20 -- # read -r var val
00:26:56.956 08:23:30 -- accel/accel.sh@21 -- # val=
00:26:56.956 08:23:30 -- accel/accel.sh@22 -- # case "$var" in
00:26:56.956 08:23:30 -- accel/accel.sh@20 -- # IFS=:
00:26:56.956 08:23:30 -- accel/accel.sh@20 -- # read -r var val
00:26:56.956 08:23:30 -- accel/accel.sh@21 -- # val=
00:26:56.956 08:23:30 -- accel/accel.sh@22 -- # case "$var" in
00:26:56.956 08:23:30 -- accel/accel.sh@20 -- # IFS=:
00:26:56.956 08:23:30 -- accel/accel.sh@20 -- # read -r var val
00:26:56.957 08:23:30 -- accel/accel.sh@21 -- # val=
00:26:56.957 08:23:30 -- accel/accel.sh@22 -- # case "$var" in
00:26:56.957 08:23:30 -- accel/accel.sh@20 -- # IFS=:
00:26:56.957 08:23:30 -- accel/accel.sh@20 -- # read -r var val
00:26:56.957 08:23:30 -- accel/accel.sh@21 -- # val=
00:26:56.957 08:23:30 -- accel/accel.sh@22 -- # case "$var" in
00:26:56.957 08:23:30 -- accel/accel.sh@20 -- # IFS=:
00:26:56.957 08:23:30 -- accel/accel.sh@20 -- # read -r var val
00:26:56.957 08:23:30 -- accel/accel.sh@21 -- # val=
00:26:56.957 08:23:30 -- accel/accel.sh@22 -- # case "$var" in
00:26:56.957 08:23:30 -- accel/accel.sh@20 -- # IFS=:
00:26:56.957 08:23:30 -- accel/accel.sh@20 -- # read -r var val
00:26:56.957 08:23:30 -- accel/accel.sh@21 -- # val=
00:26:56.957 08:23:30 -- accel/accel.sh@22 -- # case "$var" in
00:26:56.957 08:23:30 -- accel/accel.sh@20 -- # IFS=:
00:26:56.957 08:23:30 -- accel/accel.sh@20 -- # read -r var val
00:26:56.957 08:23:30 -- accel/accel.sh@28 -- # [[ -n software ]]
00:26:56.957 08:23:30 -- accel/accel.sh@28 -- # [[ -n dualcast ]]
00:26:56.957 08:23:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:56.957 
00:26:56.957 real 0m2.992s
00:26:56.957 user 0m2.601s
00:26:56.957 sys 0m0.192s
00:26:56.957 08:23:30 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:56.957 08:23:30 -- common/autotest_common.sh@10 -- # set +x
00:26:56.957 ************************************
00:26:56.957 END TEST accel_dualcast
00:26:56.957 ************************************
00:26:56.957 08:23:30 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:26:56.957 08:23:30 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']'
00:26:56.957 08:23:30 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:26:56.957 08:23:30 -- common/autotest_common.sh@10 -- # set +x
00:26:56.957 ************************************
00:26:56.957 START TEST accel_compare
00:26:56.957 ************************************
00:26:56.957 08:23:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y
00:26:56.957 08:23:30 -- accel/accel.sh@16 -- # local accel_opc
00:26:56.957 08:23:30 -- accel/accel.sh@17 -- # local accel_module
00:26:56.957 08:23:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y
00:26:56.957 08:23:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:26:56.957 08:23:30 -- accel/accel.sh@12 -- # build_accel_config
00:26:56.957 08:23:30 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:26:56.957 08:23:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:26:56.957 08:23:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:26:56.957 08:23:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:26:56.957 08:23:30 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:26:56.957 08:23:30 -- accel/accel.sh@41 -- # local IFS=,
00:26:56.957 08:23:30 -- accel/accel.sh@42 -- # jq -r .
00:26:56.957 [2024-04-17 08:23:30.135299] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:26:56.957 [2024-04-17 08:23:30.135380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58956 ]
00:26:56.957 [2024-04-17 08:23:30.274561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:57.216 [2024-04-17 08:23:30.382937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:58.593 08:23:31 -- accel/accel.sh@18 -- # out='
00:26:58.593 SPDK Configuration:
00:26:58.593 Core mask: 0x1
00:26:58.593 
00:26:58.593 Accel Perf Configuration:
00:26:58.593 Workload Type: compare
00:26:58.593 Transfer size: 4096 bytes
00:26:58.593 Vector count 1
00:26:58.593 Module: software
00:26:58.593 Queue depth: 32
00:26:58.593 Allocate depth: 32
00:26:58.593 # threads/core: 1
00:26:58.593 Run time: 1 seconds
00:26:58.593 Verify: Yes
00:26:58.593 
00:26:58.593 Running for 1 seconds...
00:26:58.593 
00:26:58.593 Core,Thread Transfers Bandwidth Failed Miscompares
00:26:58.593 ------------------------------------------------------------------------------------
00:26:58.593 0,0 494304/s 1930 MiB/s 0 0
00:26:58.593 ====================================================================================
00:26:58.593 Total 494304/s 1930 MiB/s 0 0'
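(Reader-side aside.) The compare table checks out the same way (mibs 494304 4096 -> 1930). For bulk post-processing, all the Total rows in this output share one shape, so a saved copy of the console text can be filtered with a short awk one-liner; 'console.log' is a hypothetical local filename, not a file produced by this job:

  # Sketch: pull every workload's Total row out of a saved copy of this log.
  awk '$2 == "Total" { print $3, $4, $5 }' console.log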
00:26:58.593 08:23:31 -- accel/accel.sh@20 -- # IFS=:
00:26:58.593 08:23:31 -- accel/accel.sh@20 -- # read -r var val
00:26:58.593 08:23:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:26:58.593 08:23:31 -- accel/accel.sh@12 -- # build_accel_config
00:26:58.593 08:23:31 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:26:58.593 08:23:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:26:58.593 08:23:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:26:58.593 08:23:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:26:58.593 08:23:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:26:58.593 08:23:31 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:26:58.593 08:23:31 -- accel/accel.sh@41 -- # local IFS=,
00:26:58.593 08:23:31 -- accel/accel.sh@42 -- # jq -r .
00:26:58.593 [2024-04-17 08:23:31.623177] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:26:58.593 [2024-04-17 08:23:31.623376] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58976 ]
00:26:58.593 [2024-04-17 08:23:31.764687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:58.593 [2024-04-17 08:23:31.869764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:58.593 08:23:31 -- accel/accel.sh@21 -- # val=
00:26:58.593 08:23:31 -- accel/accel.sh@22 -- # case "$var" in
00:26:58.593 08:23:31 -- accel/accel.sh@20 -- # IFS=:
00:26:58.593 08:23:31 -- accel/accel.sh@20 -- # read -r var val
00:26:58.593 08:23:31 -- accel/accel.sh@21 -- # val=
00:26:58.593 08:23:31 -- accel/accel.sh@22 -- # case "$var" in
00:26:58.593 08:23:31 -- accel/accel.sh@20 -- # IFS=:
00:26:58.593 08:23:31 -- accel/accel.sh@20 -- # read -r var val
00:26:58.593 08:23:31 -- accel/accel.sh@21 -- # val=0x1
00:26:58.593 08:23:31 -- accel/accel.sh@22 -- # case "$var" in
00:26:58.593 08:23:31 -- accel/accel.sh@20 -- # IFS=:
00:26:58.593 08:23:31 -- accel/accel.sh@20 -- # read -r var val
00:26:58.593 08:23:31 -- accel/accel.sh@21 -- # val=
00:26:58.593 08:23:31 -- accel/accel.sh@22 -- # case "$var" in
00:26:58.593 08:23:31 -- accel/accel.sh@20 -- # IFS=:
00:26:58.593 08:23:31 -- accel/accel.sh@20 -- # read -r var val
00:26:58.593 08:23:31 -- accel/accel.sh@21 -- # val=
00:26:58.593 08:23:31 -- accel/accel.sh@22 -- # case "$var" in
00:26:58.593 08:23:31 -- accel/accel.sh@20 -- # IFS=:
00:26:58.593 08:23:31 -- accel/accel.sh@20 -- # read -r var val
00:26:58.593 08:23:31 -- accel/accel.sh@21 -- # val=compare
00:26:58.593 08:23:31 -- accel/accel.sh@22 -- # case "$var" in
00:26:58.593 08:23:31 -- accel/accel.sh@24 -- # accel_opc=compare
00:26:58.593 08:23:31 -- accel/accel.sh@20 -- # IFS=:
00:26:58.593 08:23:31 -- accel/accel.sh@20 -- # read -r var val
00:26:58.593 08:23:31 -- accel/accel.sh@21 -- # val='4096 bytes'
00:26:58.593 08:23:31 -- accel/accel.sh@22 -- # case "$var" in
00:26:58.593 08:23:31 -- accel/accel.sh@20 -- # IFS=:
00:26:58.593 08:23:31 -- accel/accel.sh@20 -- # read -r var val
00:26:58.593 08:23:31 -- accel/accel.sh@21 -- # val=
00:26:58.593 08:23:31 -- accel/accel.sh@22 -- # case "$var" in
00:26:58.593 08:23:31 -- accel/accel.sh@20 -- # IFS=:
00:26:58.593 08:23:31 -- accel/accel.sh@20 -- # read -r var val
00:26:58.593 08:23:31 -- accel/accel.sh@21 -- # val=software
00:26:58.593 08:23:31 -- accel/accel.sh@22 -- # case "$var" in
00:26:58.593 08:23:31 -- accel/accel.sh@23 -- # accel_module=software
00:26:58.593 08:23:31 -- accel/accel.sh@20 -- # IFS=:
00:26:58.853 08:23:31 -- accel/accel.sh@20 -- # read -r var val
00:26:58.853 08:23:31 -- accel/accel.sh@21 -- # val=32
00:26:58.853 08:23:31 -- accel/accel.sh@22 -- # case "$var" in
00:26:58.853 08:23:31 -- accel/accel.sh@20 -- # IFS=:
00:26:58.853 08:23:31 -- accel/accel.sh@20 -- # read -r var val
00:26:58.853 08:23:31 -- accel/accel.sh@21 -- # val=32
00:26:58.853 08:23:31 -- accel/accel.sh@22 -- # case "$var" in
00:26:58.853 08:23:31 -- accel/accel.sh@20 -- # IFS=:
00:26:58.853 08:23:31 -- accel/accel.sh@20 -- # read -r var val
00:26:58.853 08:23:31 -- accel/accel.sh@21 -- # val=1
00:26:58.853 08:23:31 -- accel/accel.sh@22 -- # case "$var" in
00:26:58.853 08:23:31 -- accel/accel.sh@20 -- # IFS=:
00:26:58.853 08:23:31 -- accel/accel.sh@20 -- # read -r var val
00:26:58.853 08:23:31 -- accel/accel.sh@21 -- # val='1 seconds'
00:26:58.853 08:23:31 -- accel/accel.sh@22 -- # case "$var" in
00:26:58.853 08:23:31 -- accel/accel.sh@20 -- # IFS=:
00:26:58.853 08:23:31 -- accel/accel.sh@20 -- # read -r var val
00:26:58.853 08:23:31 -- accel/accel.sh@21 -- # val=Yes
00:26:58.853 08:23:31 -- accel/accel.sh@22 -- # case "$var" in
00:26:58.853 08:23:31 -- accel/accel.sh@20 -- # IFS=:
00:26:58.853 08:23:31 -- accel/accel.sh@20 -- # read -r var val
00:26:58.853 08:23:31 -- accel/accel.sh@21 -- # val=
00:26:58.853 08:23:31 -- accel/accel.sh@22 -- # case "$var" in
00:26:58.853 08:23:31 -- accel/accel.sh@20 -- # IFS=:
00:26:58.853 08:23:31 -- accel/accel.sh@20 -- # read -r var val
00:26:58.853 08:23:31 -- accel/accel.sh@21 -- # val=
00:26:58.853 08:23:31 -- accel/accel.sh@22 -- # case "$var" in
00:26:58.853 08:23:31 -- accel/accel.sh@20 -- # IFS=:
00:26:58.853 08:23:31 -- accel/accel.sh@20 -- # read -r var val
00:26:59.790 08:23:33 -- accel/accel.sh@21 -- # val=
00:26:59.790 08:23:33 -- accel/accel.sh@22 -- # case "$var" in
00:26:59.790 08:23:33 -- accel/accel.sh@20 -- # IFS=:
00:26:59.790 08:23:33 -- accel/accel.sh@20 -- # read -r var val
00:26:59.790 08:23:33 -- accel/accel.sh@21 -- # val=
00:26:59.790 08:23:33 -- accel/accel.sh@22 -- # case "$var" in
00:26:59.790 08:23:33 -- accel/accel.sh@20 -- # IFS=:
00:26:59.790 08:23:33 -- accel/accel.sh@20 -- # read -r var val
00:26:59.790 08:23:33 -- accel/accel.sh@21 -- # val=
00:26:59.790 08:23:33 -- accel/accel.sh@22 -- # case "$var" in
00:26:59.790 08:23:33 -- accel/accel.sh@20 -- # IFS=:
00:26:59.790 08:23:33 -- accel/accel.sh@20 -- # read -r var val
00:26:59.790 08:23:33 -- accel/accel.sh@21 -- # val=
00:26:59.790 08:23:33 -- accel/accel.sh@22 -- # case "$var" in
00:26:59.790 08:23:33 -- accel/accel.sh@20 -- # IFS=:
00:26:59.790 08:23:33 -- accel/accel.sh@20 -- # read -r var val
00:26:59.790 08:23:33 -- accel/accel.sh@21 -- # val=
00:26:59.790 08:23:33 -- accel/accel.sh@22 -- # case "$var" in
00:26:59.790 08:23:33 -- accel/accel.sh@20 -- # IFS=:
00:26:59.790 08:23:33 -- accel/accel.sh@20 -- # read -r var val
00:26:59.790 08:23:33 -- accel/accel.sh@21 -- # val=
00:26:59.790 08:23:33 -- accel/accel.sh@22 -- # case "$var" in
00:26:59.790 08:23:33 -- accel/accel.sh@20 -- # IFS=:
00:26:59.790 08:23:33 -- accel/accel.sh@20 -- # read -r var val
00:26:59.790 08:23:33 -- accel/accel.sh@28 -- # [[ -n software ]]
00:26:59.790 08:23:33 -- accel/accel.sh@28 -- # [[ -n compare ]]
00:26:59.790 08:23:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:59.790 
00:26:59.790 real 0m2.982s
00:26:59.790 user 0m2.588s
00:26:59.790 sys 0m0.194s
00:26:59.790 08:23:33 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:59.790 08:23:33 -- common/autotest_common.sh@10 -- # set +x
00:26:59.790 ************************************
00:26:59.790 END TEST accel_compare
00:26:59.790 ************************************
00:27:00.049 08:23:33 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:27:00.049 08:23:33 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']'
00:27:00.049 08:23:33 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:27:00.049 08:23:33 -- common/autotest_common.sh@10 -- # set +x
00:27:00.049 ************************************
00:27:00.049 START TEST accel_xor
00:27:00.049 ************************************
00:27:00.049 08:23:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y
00:27:00.049 08:23:33 -- accel/accel.sh@16 -- # local accel_opc
00:27:00.049 08:23:33 -- accel/accel.sh@17 -- # local accel_module
00:27:00.049 08:23:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y
00:27:00.049 08:23:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:27:00.049 08:23:33 -- accel/accel.sh@12 -- # build_accel_config
00:27:00.049 08:23:33 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:27:00.049 08:23:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:27:00.049 08:23:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:27:00.049 08:23:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:27:00.049 08:23:33 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:27:00.049 08:23:33 -- accel/accel.sh@41 -- # local IFS=,
00:27:00.049 08:23:33 -- accel/accel.sh@42 -- # jq -r .
00:27:00.049 [2024-04-17 08:23:33.175131] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:27:00.049 [2024-04-17 08:23:33.175268] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59006 ]
00:27:00.049 [2024-04-17 08:23:33.317035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:00.308 [2024-04-17 08:23:33.422731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:01.690 08:23:34 -- accel/accel.sh@18 -- # out='
00:27:01.690 SPDK Configuration:
00:27:01.690 Core mask: 0x1
00:27:01.690 
00:27:01.690 Accel Perf Configuration:
00:27:01.690 Workload Type: xor
00:27:01.690 Source buffers: 2
00:27:01.690 Transfer size: 4096 bytes
00:27:01.690 Vector count 1
00:27:01.690 Module: software
00:27:01.690 Queue depth: 32
00:27:01.690 Allocate depth: 32
00:27:01.690 # threads/core: 1
00:27:01.690 Run time: 1 seconds
00:27:01.690 Verify: Yes
00:27:01.690 
00:27:01.690 Running for 1 seconds...
00:27:01.690 
00:27:01.690 Core,Thread Transfers Bandwidth Failed Miscompares
00:27:01.690 ------------------------------------------------------------------------------------
00:27:01.690 0,0 352704/s 1377 MiB/s 0 0
00:27:01.690 ====================================================================================
00:27:01.690 Total 352704/s 1377 MiB/s 0 0'
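(Reader-side aside.) For the two-source xor run, mibs 352704 4096 -> 1377, matching the table; judging by the match, the reported bandwidth counts the 4096-byte transfer once, independent of the number of source buffers:

  mibs 352704 4096   # -> 1377, matching the xor (2 source buffers) table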
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # IFS=:
00:27:01.690 08:23:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # read -r var val
00:27:01.690 08:23:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:27:01.690 08:23:34 -- accel/accel.sh@12 -- # build_accel_config
00:27:01.690 08:23:34 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:27:01.690 08:23:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:27:01.690 08:23:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:27:01.690 08:23:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:27:01.690 08:23:34 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:27:01.690 08:23:34 -- accel/accel.sh@41 -- # local IFS=,
00:27:01.690 08:23:34 -- accel/accel.sh@42 -- # jq -r .
00:27:01.690 [2024-04-17 08:23:34.669618] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:27:01.690 [2024-04-17 08:23:34.669698] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59030 ]
00:27:01.690 [2024-04-17 08:23:34.810059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:01.690 [2024-04-17 08:23:34.915284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:01.690 08:23:34 -- accel/accel.sh@21 -- # val=
00:27:01.690 08:23:34 -- accel/accel.sh@22 -- # case "$var" in
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # IFS=:
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # read -r var val
00:27:01.690 08:23:34 -- accel/accel.sh@21 -- # val=
00:27:01.690 08:23:34 -- accel/accel.sh@22 -- # case "$var" in
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # IFS=:
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # read -r var val
00:27:01.690 08:23:34 -- accel/accel.sh@21 -- # val=0x1
00:27:01.690 08:23:34 -- accel/accel.sh@22 -- # case "$var" in
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # IFS=:
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # read -r var val
00:27:01.690 08:23:34 -- accel/accel.sh@21 -- # val=
00:27:01.690 08:23:34 -- accel/accel.sh@22 -- # case "$var" in
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # IFS=:
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # read -r var val
00:27:01.690 08:23:34 -- accel/accel.sh@21 -- # val=
00:27:01.690 08:23:34 -- accel/accel.sh@22 -- # case "$var" in
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # IFS=:
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # read -r var val
00:27:01.690 08:23:34 -- accel/accel.sh@21 -- # val=xor
00:27:01.690 08:23:34 -- accel/accel.sh@22 -- # case "$var" in
00:27:01.690 08:23:34 -- accel/accel.sh@24 -- # accel_opc=xor
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # IFS=:
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # read -r var val
00:27:01.690 08:23:34 -- accel/accel.sh@21 -- # val=2
00:27:01.690 08:23:34 -- accel/accel.sh@22 -- # case "$var" in
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # IFS=:
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # read -r var val
00:27:01.690 08:23:34 -- accel/accel.sh@21 -- # val='4096 bytes'
00:27:01.690 08:23:34 -- accel/accel.sh@22 -- # case "$var" in
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # IFS=:
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # read -r var val
00:27:01.690 08:23:34 -- accel/accel.sh@21 -- # val=
00:27:01.690 08:23:34 -- accel/accel.sh@22 -- # case "$var" in
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # IFS=:
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # read -r var val
00:27:01.690 08:23:34 -- accel/accel.sh@21 -- # val=software
00:27:01.690 08:23:34 -- accel/accel.sh@22 -- # case "$var" in
00:27:01.690 08:23:34 -- accel/accel.sh@23 -- # accel_module=software
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # IFS=:
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # read -r var val
00:27:01.690 08:23:34 -- accel/accel.sh@21 -- # val=32
00:27:01.690 08:23:34 -- accel/accel.sh@22 -- # case "$var" in
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # IFS=:
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # read -r var val
00:27:01.690 08:23:34 -- accel/accel.sh@21 -- # val=32
00:27:01.690 08:23:34 -- accel/accel.sh@22 -- # case "$var" in
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # IFS=:
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # read -r var val
00:27:01.690 08:23:34 -- accel/accel.sh@21 -- # val=1
00:27:01.690 08:23:34 -- accel/accel.sh@22 -- # case "$var" in
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # IFS=:
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # read -r var val
00:27:01.690 08:23:34 -- accel/accel.sh@21 -- # val='1 seconds'
00:27:01.690 08:23:34 -- accel/accel.sh@22 -- # case "$var" in
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # IFS=:
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # read -r var val
00:27:01.690 08:23:34 -- accel/accel.sh@21 -- # val=Yes
00:27:01.690 08:23:34 -- accel/accel.sh@22 -- # case "$var" in
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # IFS=:
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # read -r var val
00:27:01.690 08:23:34 -- accel/accel.sh@21 -- # val=
00:27:01.690 08:23:34 -- accel/accel.sh@22 -- # case "$var" in
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # IFS=:
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # read -r var val
00:27:01.690 08:23:34 -- accel/accel.sh@21 -- # val=
00:27:01.690 08:23:34 -- accel/accel.sh@22 -- # case "$var" in
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # IFS=:
00:27:01.690 08:23:34 -- accel/accel.sh@20 -- # read -r var val
00:27:03.071 08:23:36 -- accel/accel.sh@21 -- # val=
00:27:03.071 08:23:36 -- accel/accel.sh@22 -- # case "$var" in
00:27:03.071 08:23:36 -- accel/accel.sh@20 -- # IFS=:
00:27:03.071 08:23:36 -- accel/accel.sh@20 -- # read -r var val
00:27:03.071 08:23:36 -- accel/accel.sh@21 -- # val=
00:27:03.071 08:23:36 -- accel/accel.sh@22 -- # case "$var" in
00:27:03.071 08:23:36 -- accel/accel.sh@20 -- # IFS=:
00:27:03.071 08:23:36 -- accel/accel.sh@20 -- # read -r var val
00:27:03.071 08:23:36 -- accel/accel.sh@21 -- # val=
00:27:03.071 08:23:36 -- accel/accel.sh@22 -- # case "$var" in
00:27:03.071 08:23:36 -- accel/accel.sh@20 -- # IFS=:
00:27:03.071 08:23:36 -- accel/accel.sh@20 -- # read -r var val
00:27:03.071 08:23:36 -- accel/accel.sh@21 -- # val=
00:27:03.071 08:23:36 -- accel/accel.sh@22 -- # case "$var" in
00:27:03.071 08:23:36 -- accel/accel.sh@20 -- # IFS=:
00:27:03.071 08:23:36 -- accel/accel.sh@20 -- # read -r var val
00:27:03.071 08:23:36 -- accel/accel.sh@21 -- # val=
00:27:03.071 08:23:36 -- accel/accel.sh@22 -- # case "$var" in
00:27:03.071 08:23:36 -- accel/accel.sh@20 -- # IFS=:
00:27:03.071 08:23:36 -- accel/accel.sh@20 -- # read -r var val
00:27:03.071 08:23:36 -- accel/accel.sh@21 -- # val=
00:27:03.071 08:23:36 -- accel/accel.sh@22 -- # case "$var" in
00:27:03.071 08:23:36 -- accel/accel.sh@20 -- # IFS=:
00:27:03.071 08:23:36 -- accel/accel.sh@20 -- # read -r var val
00:27:03.071 08:23:36 -- accel/accel.sh@28 -- # [[ -n software ]]
00:27:03.071 08:23:36 -- accel/accel.sh@28 -- # [[ -n xor ]]
00:27:03.071 08:23:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:03.071 
00:27:03.071 real 0m2.983s
00:27:03.071 user 0m2.574s
00:27:03.071 sys 0m0.203s
00:27:03.071 08:23:36 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:03.071 08:23:36 -- common/autotest_common.sh@10 -- # set +x
00:27:03.071 ************************************
00:27:03.071 END TEST accel_xor
00:27:03.071 ************************************
00:27:03.071 08:23:36 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:27:03.071 08:23:36 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']'
00:27:03.071 08:23:36 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:27:03.071 08:23:36 -- common/autotest_common.sh@10 -- # set +x
00:27:03.071 ************************************
00:27:03.071 START TEST accel_xor
00:27:03.071 ************************************
00:27:03.071 08:23:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3
00:27:03.071 08:23:36 -- accel/accel.sh@16 -- # local accel_opc
00:27:03.071 08:23:36 -- accel/accel.sh@17 -- # local accel_module
00:27:03.071 08:23:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3
00:27:03.071 08:23:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:27:03.071 08:23:36 -- accel/accel.sh@12 -- # build_accel_config
00:27:03.071 08:23:36 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:27:03.071 08:23:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:27:03.071 08:23:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:27:03.071 08:23:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:27:03.071 08:23:36 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:27:03.071 08:23:36 -- accel/accel.sh@41 -- # local IFS=,
00:27:03.071 08:23:36 -- accel/accel.sh@42 -- # jq -r .
00:27:03.071 [2024-04-17 08:23:36.218388] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:27:03.071 [2024-04-17 08:23:36.218586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59059 ]
00:27:03.071 [2024-04-17 08:23:36.354730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:03.330 [2024-04-17 08:23:36.455206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:04.707 08:23:37 -- accel/accel.sh@18 -- # out='
00:27:04.707 SPDK Configuration:
00:27:04.707 Core mask: 0x1
00:27:04.707 
00:27:04.707 Accel Perf Configuration:
00:27:04.707 Workload Type: xor
00:27:04.707 Source buffers: 3
00:27:04.707 Transfer size: 4096 bytes
00:27:04.707 Vector count 1
00:27:04.707 Module: software
00:27:04.707 Queue depth: 32
00:27:04.707 Allocate depth: 32
00:27:04.707 # threads/core: 1
00:27:04.707 Run time: 1 seconds
00:27:04.707 Verify: Yes
00:27:04.707 
00:27:04.707 Running for 1 seconds...
00:27:04.707 
00:27:04.707 Core,Thread Transfers Bandwidth Failed Miscompares
00:27:04.707 ------------------------------------------------------------------------------------
00:27:04.707 0,0 391712/s 1530 MiB/s 0 0
00:27:04.707 ====================================================================================
00:27:04.707 Total 391712/s 1530 MiB/s 0 0'
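(Reader-side aside.) The three-source xor table matches as well:

  mibs 391712 4096   # -> 1530, matching the xor -x 3 table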
00:27:04.707 08:23:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # IFS=:
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # read -r var val
00:27:04.707 08:23:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:27:04.707 08:23:37 -- accel/accel.sh@12 -- # build_accel_config
00:27:04.707 08:23:37 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:27:04.707 08:23:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:27:04.707 08:23:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:27:04.707 08:23:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:27:04.707 08:23:37 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:27:04.707 08:23:37 -- accel/accel.sh@41 -- # local IFS=,
00:27:04.707 08:23:37 -- accel/accel.sh@42 -- # jq -r .
00:27:04.707 [2024-04-17 08:23:37.676395] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:27:04.707 [2024-04-17 08:23:37.676532] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59084 ]
00:27:04.707 [2024-04-17 08:23:37.807835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:04.707 [2024-04-17 08:23:37.906680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:04.707 08:23:37 -- accel/accel.sh@21 -- # val=
00:27:04.707 08:23:37 -- accel/accel.sh@22 -- # case "$var" in
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # IFS=:
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # read -r var val
00:27:04.707 08:23:37 -- accel/accel.sh@21 -- # val=
00:27:04.707 08:23:37 -- accel/accel.sh@22 -- # case "$var" in
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # IFS=:
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # read -r var val
00:27:04.707 08:23:37 -- accel/accel.sh@21 -- # val=0x1
00:27:04.707 08:23:37 -- accel/accel.sh@22 -- # case "$var" in
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # IFS=:
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # read -r var val
00:27:04.707 08:23:37 -- accel/accel.sh@21 -- # val=
00:27:04.707 08:23:37 -- accel/accel.sh@22 -- # case "$var" in
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # IFS=:
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # read -r var val
00:27:04.707 08:23:37 -- accel/accel.sh@21 -- # val=
00:27:04.707 08:23:37 -- accel/accel.sh@22 -- # case "$var" in
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # IFS=:
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # read -r var val
00:27:04.707 08:23:37 -- accel/accel.sh@21 -- # val=xor
00:27:04.707 08:23:37 -- accel/accel.sh@22 -- # case "$var" in
00:27:04.707 08:23:37 -- accel/accel.sh@24 -- # accel_opc=xor
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # IFS=:
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # read -r var val
00:27:04.707 08:23:37 -- accel/accel.sh@21 -- # val=3
00:27:04.707 08:23:37 -- accel/accel.sh@22 -- # case "$var" in
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # IFS=:
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # read -r var val
00:27:04.707 08:23:37 -- accel/accel.sh@21 -- # val='4096 bytes'
00:27:04.707 08:23:37 -- accel/accel.sh@22 -- # case "$var" in
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # IFS=:
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # read -r var val
00:27:04.707 08:23:37 -- accel/accel.sh@21 -- # val=
00:27:04.707 08:23:37 -- accel/accel.sh@22 -- # case "$var" in
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # IFS=:
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # read -r var val
00:27:04.707 08:23:37 -- accel/accel.sh@21 -- # val=software
00:27:04.707 08:23:37 -- accel/accel.sh@22 -- # case "$var" in
00:27:04.707 08:23:37 -- accel/accel.sh@23 -- # accel_module=software
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # IFS=:
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # read -r var val
00:27:04.707 08:23:37 -- accel/accel.sh@21 -- # val=32
00:27:04.707 08:23:37 -- accel/accel.sh@22 -- # case "$var" in
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # IFS=:
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # read -r var val
00:27:04.707 08:23:37 -- accel/accel.sh@21 -- # val=32
00:27:04.707 08:23:37 -- accel/accel.sh@22 -- # case "$var" in
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # IFS=:
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # read -r var val
00:27:04.707 08:23:37 -- accel/accel.sh@21 -- # val=1
00:27:04.707 08:23:37 -- accel/accel.sh@22 -- # case "$var" in
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # IFS=:
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # read -r var val
00:27:04.707 08:23:37 -- accel/accel.sh@21 -- # val='1 seconds'
00:27:04.707 08:23:37 -- accel/accel.sh@22 -- # case "$var" in
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # IFS=:
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # read -r var val
00:27:04.707 08:23:37 -- accel/accel.sh@21 -- # val=Yes
00:27:04.707 08:23:37 -- accel/accel.sh@22 -- # case "$var" in
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # IFS=:
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # read -r var val
00:27:04.707 08:23:37 -- accel/accel.sh@21 -- # val=
00:27:04.707 08:23:37 -- accel/accel.sh@22 -- # case "$var" in
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # IFS=:
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # read -r var val
00:27:04.707 08:23:37 -- accel/accel.sh@21 -- # val=
00:27:04.707 08:23:37 -- accel/accel.sh@22 -- # case "$var" in
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # IFS=:
00:27:04.707 08:23:37 -- accel/accel.sh@20 -- # read -r var val
00:27:06.085 08:23:39 -- accel/accel.sh@21 -- # val=
00:27:06.085 08:23:39 -- accel/accel.sh@22 -- # case "$var" in
00:27:06.085 08:23:39 -- accel/accel.sh@20 -- # IFS=:
00:27:06.085 08:23:39 -- accel/accel.sh@20 -- # read -r var val
00:27:06.085 08:23:39 -- accel/accel.sh@21 -- # val=
00:27:06.085 08:23:39 -- accel/accel.sh@22 -- # case "$var" in
00:27:06.085 08:23:39 -- accel/accel.sh@20 -- # IFS=:
00:27:06.085 08:23:39 -- accel/accel.sh@20 -- # read -r var val
00:27:06.085 08:23:39 -- accel/accel.sh@21 -- # val=
00:27:06.085 08:23:39 -- accel/accel.sh@22 -- # case "$var" in
00:27:06.085 08:23:39 -- accel/accel.sh@20 -- # IFS=:
00:27:06.085 08:23:39 -- accel/accel.sh@20 -- # read -r var val
00:27:06.085 08:23:39 -- accel/accel.sh@21 -- # val=
00:27:06.085 08:23:39 -- accel/accel.sh@22 -- # case "$var" in
00:27:06.085 08:23:39 -- accel/accel.sh@20 -- # IFS=:
00:27:06.085 08:23:39 -- accel/accel.sh@20 -- # read -r var val
00:27:06.085 08:23:39 -- accel/accel.sh@21 -- # val=
00:27:06.085 08:23:39 -- accel/accel.sh@22 -- # case "$var" in
00:27:06.085 08:23:39 -- accel/accel.sh@20 -- # IFS=:
00:27:06.085 08:23:39 -- accel/accel.sh@20 -- # read -r var val
00:27:06.085 08:23:39 -- accel/accel.sh@21 -- # val=
00:27:06.085 08:23:39 -- accel/accel.sh@22 -- # case "$var" in
00:27:06.085 08:23:39 -- accel/accel.sh@20 -- # IFS=:
00:27:06.085 08:23:39 -- accel/accel.sh@20 -- # read -r var val
00:27:06.085 08:23:39 -- accel/accel.sh@28 -- # [[ -n software ]]
00:27:06.085 08:23:39 -- accel/accel.sh@28 -- # [[ -n xor ]]
00:27:06.085 08:23:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:06.085 
00:27:06.085 real 0m2.930s
00:27:06.085 user 0m2.548s
00:27:06.085 sys 0m0.184s
00:27:06.085 08:23:39 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:06.085 08:23:39 -- common/autotest_common.sh@10 -- # set +x
00:27:06.085 ************************************
00:27:06.085 END TEST accel_xor
00:27:06.085 ************************************
00:27:06.085 08:23:39 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:27:06.085 08:23:39 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']'
00:27:06.085 08:23:39 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:27:06.085 08:23:39 -- common/autotest_common.sh@10 -- # set +x
00:27:06.085 ************************************
00:27:06.085 START TEST accel_dif_verify
00:27:06.085 ************************************
00:27:06.085 08:23:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify
00:27:06.085 08:23:39 -- accel/accel.sh@16 -- # local accel_opc
00:27:06.085 08:23:39 -- accel/accel.sh@17 -- # local accel_module
00:27:06.085 08:23:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify
00:27:06.085 08:23:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:27:06.085 08:23:39 -- accel/accel.sh@12 -- # build_accel_config
00:27:06.085 08:23:39 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:27:06.085 08:23:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:27:06.085 08:23:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:27:06.085 08:23:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:27:06.085 08:23:39 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:27:06.085 08:23:39 -- accel/accel.sh@41 -- # local IFS=,
00:27:06.085 08:23:39 -- accel/accel.sh@42 -- # jq -r .
00:27:06.085 [2024-04-17 08:23:39.215078] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:27:06.085 [2024-04-17 08:23:39.215163] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59113 ]
00:27:06.085 [2024-04-17 08:23:39.357525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:06.345 [2024-04-17 08:23:39.450466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:07.336 08:23:40 -- accel/accel.sh@18 -- # out='
00:27:07.336 SPDK Configuration:
00:27:07.336 Core mask: 0x1
00:27:07.336 
00:27:07.336 Accel Perf Configuration:
00:27:07.336 Workload Type: dif_verify
00:27:07.336 Vector size: 4096 bytes
00:27:07.336 Transfer size: 4096 bytes
00:27:07.336 Block size: 512 bytes
00:27:07.336 Metadata size: 8 bytes
00:27:07.336 Vector count 1
00:27:07.336 Module: software
00:27:07.336 Queue depth: 32
00:27:07.336 Allocate depth: 32
00:27:07.336 # threads/core: 1
00:27:07.336 Run time: 1 seconds
00:27:07.336 Verify: No
00:27:07.336 
00:27:07.336 Running for 1 seconds...
00:27:07.336 
00:27:07.336 Core,Thread Transfers Bandwidth Failed Miscompares
00:27:07.336 ------------------------------------------------------------------------------------
00:27:07.336 0,0 120736/s 478 MiB/s 0 0
00:27:07.336 ====================================================================================
00:27:07.336 Total 120736/s 471 MiB/s 0 0'
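(Reader-side aside.) For dif_verify the Total row again matches the plain 4096-byte transfer size, while the per-core row reports 478 MiB/s; the log does not say how that figure is derived (plausibly it accounts for the DIF metadata bytes, but that is a guess):

  mibs 120736 4096   # -> 471, matching the Total row; the 478 MiB/s per-core figure is not reproduced by this formula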
00:27:07.336 08:23:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:27:07.336 08:23:40 -- accel/accel.sh@20 -- # IFS=:
00:27:07.336 08:23:40 -- accel/accel.sh@20 -- # read -r var val
00:27:07.336 08:23:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:27:07.336 08:23:40 -- accel/accel.sh@12 -- # build_accel_config
00:27:07.336 08:23:40 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:27:07.336 08:23:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:27:07.336 08:23:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:27:07.336 08:23:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:27:07.336 08:23:40 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:27:07.336 08:23:40 -- accel/accel.sh@41 -- # local IFS=,
00:27:07.336 08:23:40 -- accel/accel.sh@42 -- # jq -r .
00:27:07.595 [2024-04-17 08:23:40.686358] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:27:07.595 [2024-04-17 08:23:40.686462] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59138 ]
00:27:07.595 [2024-04-17 08:23:40.826075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:07.854 [2024-04-17 08:23:40.927647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:07.854 08:23:40 -- accel/accel.sh@21 -- # val=
00:27:07.854 08:23:40 -- accel/accel.sh@22 -- # case "$var" in
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # IFS=:
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # read -r var val
00:27:07.854 08:23:40 -- accel/accel.sh@21 -- # val=
00:27:07.854 08:23:40 -- accel/accel.sh@22 -- # case "$var" in
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # IFS=:
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # read -r var val
00:27:07.854 08:23:40 -- accel/accel.sh@21 -- # val=0x1
00:27:07.854 08:23:40 -- accel/accel.sh@22 -- # case "$var" in
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # IFS=:
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # read -r var val
00:27:07.854 08:23:40 -- accel/accel.sh@21 -- # val=
00:27:07.854 08:23:40 -- accel/accel.sh@22 -- # case "$var" in
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # IFS=:
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # read -r var val
00:27:07.854 08:23:40 -- accel/accel.sh@21 -- # val=
00:27:07.854 08:23:40 -- accel/accel.sh@22 -- # case "$var" in
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # IFS=:
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # read -r var val
00:27:07.854 08:23:40 -- accel/accel.sh@21 -- # val=dif_verify
00:27:07.854 08:23:40 -- accel/accel.sh@22 -- # case "$var" in
00:27:07.854 08:23:40 -- accel/accel.sh@24 -- # accel_opc=dif_verify
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # IFS=:
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # read -r var val
00:27:07.854 08:23:40 -- accel/accel.sh@21 -- # val='4096 bytes'
00:27:07.854 08:23:40 -- accel/accel.sh@22 -- # case "$var" in
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # IFS=:
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # read -r var val
00:27:07.854 08:23:40 -- accel/accel.sh@21 -- # val='4096 bytes'
00:27:07.854 08:23:40 -- accel/accel.sh@22 -- # case "$var" in
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # IFS=:
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # read -r var val
00:27:07.854 08:23:40 -- accel/accel.sh@21 -- # val='512 bytes'
00:27:07.854 08:23:40 -- accel/accel.sh@22 -- # case "$var" in
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # IFS=:
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # read -r var val
00:27:07.854 08:23:40 -- accel/accel.sh@21 -- # val='8 bytes'
00:27:07.854 08:23:40 -- accel/accel.sh@22 -- # case "$var" in
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # IFS=:
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # read -r var val
00:27:07.854 08:23:40 -- accel/accel.sh@21 -- # val=
00:27:07.854 08:23:40 -- accel/accel.sh@22 -- # case "$var" in
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # IFS=:
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # read -r var val
00:27:07.854 08:23:40 -- accel/accel.sh@21 -- # val=software
00:27:07.854 08:23:40 -- accel/accel.sh@22 -- # case "$var" in
00:27:07.854 08:23:40 -- accel/accel.sh@23 -- # accel_module=software
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # IFS=:
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # read -r var val
00:27:07.854 08:23:40 -- accel/accel.sh@21 -- # val=32
00:27:07.854 08:23:40 -- accel/accel.sh@22 -- # case "$var" in
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # IFS=:
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # read -r var val
00:27:07.854 08:23:40 -- accel/accel.sh@21 -- # val=32
00:27:07.854 08:23:40 -- accel/accel.sh@22 -- # case "$var" in
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # IFS=:
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # read -r var val
00:27:07.854 08:23:40 -- accel/accel.sh@21 -- # val=1
00:27:07.854 08:23:40 -- accel/accel.sh@22 -- # case "$var" in
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # IFS=:
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # read -r var val
00:27:07.854 08:23:40 -- accel/accel.sh@21 -- # val='1 seconds'
00:27:07.854 08:23:40 -- accel/accel.sh@22 -- # case "$var" in
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # IFS=:
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # read -r var val
00:27:07.854 08:23:40 -- accel/accel.sh@21 -- # val=No
00:27:07.854 08:23:40 -- accel/accel.sh@22 -- # case "$var" in
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # IFS=:
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # read -r var val
00:27:07.854 08:23:40 -- accel/accel.sh@21 -- # val=
00:27:07.854 08:23:40 -- accel/accel.sh@22 -- # case "$var" in
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # IFS=:
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # read -r var val
00:27:07.854 08:23:40 -- accel/accel.sh@21 -- # val=
00:27:07.854 08:23:40 -- accel/accel.sh@22 -- # case "$var" in
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # IFS=:
00:27:07.854 08:23:40 -- accel/accel.sh@20 -- # read -r var val
00:27:09.251 08:23:42 -- accel/accel.sh@21 -- # val=
00:27:09.251 08:23:42 -- accel/accel.sh@22 -- # case "$var" in
00:27:09.251 08:23:42 -- accel/accel.sh@20 -- # IFS=:
00:27:09.251 08:23:42 -- accel/accel.sh@20 -- # read -r var val
00:27:09.251 08:23:42 -- accel/accel.sh@21 -- # val=
00:27:09.251 08:23:42 -- accel/accel.sh@22 -- # case "$var" in
00:27:09.251 08:23:42 -- accel/accel.sh@20 -- # IFS=:
00:27:09.251 08:23:42 -- accel/accel.sh@20 -- # read -r var val
00:27:09.251 08:23:42 -- accel/accel.sh@21 -- # val=
00:27:09.251 08:23:42 -- accel/accel.sh@22 -- # case "$var" in
00:27:09.251 08:23:42 -- accel/accel.sh@20 -- # IFS=:
00:27:09.251 08:23:42 -- accel/accel.sh@20 -- # read -r var val
00:27:09.251 08:23:42 -- accel/accel.sh@21 -- # val=
00:27:09.251 08:23:42 -- accel/accel.sh@22 -- # case "$var" in
00:27:09.251 08:23:42 -- accel/accel.sh@20 -- # IFS=:
00:27:09.251 08:23:42 -- accel/accel.sh@20 -- # read -r var val
00:27:09.251 08:23:42 -- accel/accel.sh@21 -- # val=
00:27:09.251 08:23:42 -- accel/accel.sh@22 -- # case "$var" in
00:27:09.251 08:23:42 -- accel/accel.sh@20 -- # IFS=:
00:27:09.251 08:23:42 -- accel/accel.sh@20 -- # read -r var val
00:27:09.251 08:23:42 -- accel/accel.sh@21 -- # val=
00:27:09.251 08:23:42 -- accel/accel.sh@22 -- # case "$var" in
00:27:09.251 08:23:42 -- accel/accel.sh@20 -- # IFS=:
00:27:09.251 08:23:42 -- accel/accel.sh@20 -- # read -r var val
00:27:09.251 08:23:42 -- accel/accel.sh@28 -- # [[ -n software ]]
00:27:09.251 08:23:42 -- accel/accel.sh@28 -- # [[ -n dif_verify ]]
00:27:09.251 08:23:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:09.251 
00:27:09.251 real 0m2.960s
00:27:09.251 user 0m2.575s
00:27:09.251 sys 0m0.189s
00:27:09.251 08:23:42 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:09.251 08:23:42 -- common/autotest_common.sh@10 -- # set +x
00:27:09.251 ************************************
00:27:09.251 END TEST accel_dif_verify
accel_dif_verify 00:27:09.251 ************************************ 00:27:09.251 08:23:42 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:27:09.251 08:23:42 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:27:09.251 08:23:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:09.251 08:23:42 -- common/autotest_common.sh@10 -- # set +x 00:27:09.251 ************************************ 00:27:09.251 START TEST accel_dif_generate 00:27:09.251 ************************************ 00:27:09.251 08:23:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:27:09.251 08:23:42 -- accel/accel.sh@16 -- # local accel_opc 00:27:09.251 08:23:42 -- accel/accel.sh@17 -- # local accel_module 00:27:09.251 08:23:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:27:09.251 08:23:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:27:09.251 08:23:42 -- accel/accel.sh@12 -- # build_accel_config 00:27:09.251 08:23:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:27:09.251 08:23:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:27:09.251 08:23:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:27:09.251 08:23:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:27:09.251 08:23:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:27:09.251 08:23:42 -- accel/accel.sh@41 -- # local IFS=, 00:27:09.251 08:23:42 -- accel/accel.sh@42 -- # jq -r . 00:27:09.251 [2024-04-17 08:23:42.226217] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:09.251 [2024-04-17 08:23:42.226376] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59167 ] 00:27:09.251 [2024-04-17 08:23:42.366025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.251 [2024-04-17 08:23:42.473429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.631 08:23:43 -- accel/accel.sh@18 -- # out=' 00:27:10.631 SPDK Configuration: 00:27:10.631 Core mask: 0x1 00:27:10.631 00:27:10.631 Accel Perf Configuration: 00:27:10.631 Workload Type: dif_generate 00:27:10.631 Vector size: 4096 bytes 00:27:10.631 Transfer size: 4096 bytes 00:27:10.631 Block size: 512 bytes 00:27:10.631 Metadata size: 8 bytes 00:27:10.631 Vector count 1 00:27:10.631 Module: software 00:27:10.631 Queue depth: 32 00:27:10.631 Allocate depth: 32 00:27:10.631 # threads/core: 1 00:27:10.631 Run time: 1 seconds 00:27:10.631 Verify: No 00:27:10.631 00:27:10.631 Running for 1 seconds... 
00:27:10.631 00:27:10.631 Core,Thread Transfers Bandwidth Failed Miscompares 00:27:10.631 ------------------------------------------------------------------------------------ 00:27:10.631 0,0 123584/s 490 MiB/s 0 0 00:27:10.631 ==================================================================================== 00:27:10.631 Total 123584/s 482 MiB/s 0 0' 00:27:10.631 08:23:43 -- accel/accel.sh@20 -- # IFS=: 00:27:10.631 08:23:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:27:10.631 08:23:43 -- accel/accel.sh@20 -- # read -r var val 00:27:10.631 08:23:43 -- accel/accel.sh@12 -- # build_accel_config 00:27:10.631 08:23:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:27:10.631 08:23:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:27:10.631 08:23:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:27:10.631 08:23:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:27:10.631 08:23:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:27:10.631 08:23:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:27:10.631 08:23:43 -- accel/accel.sh@41 -- # local IFS=, 00:27:10.631 08:23:43 -- accel/accel.sh@42 -- # jq -r . 00:27:10.631 [2024-04-17 08:23:43.721123] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:10.631 [2024-04-17 08:23:43.721211] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59187 ] 00:27:10.631 [2024-04-17 08:23:43.863080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.890 [2024-04-17 08:23:43.967130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.890 08:23:44 -- accel/accel.sh@21 -- # val= 00:27:10.890 08:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # IFS=: 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # read -r var val 00:27:10.890 08:23:44 -- accel/accel.sh@21 -- # val= 00:27:10.890 08:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # IFS=: 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # read -r var val 00:27:10.890 08:23:44 -- accel/accel.sh@21 -- # val=0x1 00:27:10.890 08:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # IFS=: 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # read -r var val 00:27:10.890 08:23:44 -- accel/accel.sh@21 -- # val= 00:27:10.890 08:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # IFS=: 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # read -r var val 00:27:10.890 08:23:44 -- accel/accel.sh@21 -- # val= 00:27:10.890 08:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # IFS=: 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # read -r var val 00:27:10.890 08:23:44 -- accel/accel.sh@21 -- # val=dif_generate 00:27:10.890 08:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:27:10.890 08:23:44 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # IFS=: 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # read -r var val 00:27:10.890 08:23:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:27:10.890 08:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # IFS=: 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # read -r var val 
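Two quick sanity checks on the dif_generate table above; this is an editorial sketch, not captured output. The config dump's geometry (4096-byte vectors split into 512-byte blocks with 8 bytes of metadata each) reads as standard T10 DIF, and the Total row should be transfers per second times the vector size:

# Editorial sketch; all numbers are copied from the run above, nothing is measured here.
blocks_per_vector=$((4096 / 512))      # 8 protected blocks per 4096-byte vector
pi_bytes=$((blocks_per_vector * 8))    # 64 bytes of protection information per vector
echo "blocks/vector=${blocks_per_vector} pi_bytes/vector=${pi_bytes}"
# Total row: 123584 transfers/s * 4096 bytes, in MiB/s.
awk 'BEGIN { printf "%.1f MiB/s\n", 123584 * 4096 / 1048576 }'   # 482.8, matching the truncated "Total 123584/s 482 MiB/s"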
00:27:10.890 08:23:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:27:10.890 08:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # IFS=: 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # read -r var val 00:27:10.890 08:23:44 -- accel/accel.sh@21 -- # val='512 bytes' 00:27:10.890 08:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # IFS=: 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # read -r var val 00:27:10.890 08:23:44 -- accel/accel.sh@21 -- # val='8 bytes' 00:27:10.890 08:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # IFS=: 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # read -r var val 00:27:10.890 08:23:44 -- accel/accel.sh@21 -- # val= 00:27:10.890 08:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # IFS=: 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # read -r var val 00:27:10.890 08:23:44 -- accel/accel.sh@21 -- # val=software 00:27:10.890 08:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:27:10.890 08:23:44 -- accel/accel.sh@23 -- # accel_module=software 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # IFS=: 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # read -r var val 00:27:10.890 08:23:44 -- accel/accel.sh@21 -- # val=32 00:27:10.890 08:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # IFS=: 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # read -r var val 00:27:10.890 08:23:44 -- accel/accel.sh@21 -- # val=32 00:27:10.890 08:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # IFS=: 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # read -r var val 00:27:10.890 08:23:44 -- accel/accel.sh@21 -- # val=1 00:27:10.890 08:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # IFS=: 00:27:10.890 08:23:44 -- accel/accel.sh@20 -- # read -r var val 00:27:10.890 08:23:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:27:10.890 08:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:27:10.891 08:23:44 -- accel/accel.sh@20 -- # IFS=: 00:27:10.891 08:23:44 -- accel/accel.sh@20 -- # read -r var val 00:27:10.891 08:23:44 -- accel/accel.sh@21 -- # val=No 00:27:10.891 08:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:27:10.891 08:23:44 -- accel/accel.sh@20 -- # IFS=: 00:27:10.891 08:23:44 -- accel/accel.sh@20 -- # read -r var val 00:27:10.891 08:23:44 -- accel/accel.sh@21 -- # val= 00:27:10.891 08:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:27:10.891 08:23:44 -- accel/accel.sh@20 -- # IFS=: 00:27:10.891 08:23:44 -- accel/accel.sh@20 -- # read -r var val 00:27:10.891 08:23:44 -- accel/accel.sh@21 -- # val= 00:27:10.891 08:23:44 -- accel/accel.sh@22 -- # case "$var" in 00:27:10.891 08:23:44 -- accel/accel.sh@20 -- # IFS=: 00:27:10.891 08:23:44 -- accel/accel.sh@20 -- # read -r var val 00:27:12.270 08:23:45 -- accel/accel.sh@21 -- # val= 00:27:12.270 08:23:45 -- accel/accel.sh@22 -- # case "$var" in 00:27:12.270 08:23:45 -- accel/accel.sh@20 -- # IFS=: 00:27:12.270 08:23:45 -- accel/accel.sh@20 -- # read -r var val 00:27:12.270 08:23:45 -- accel/accel.sh@21 -- # val= 00:27:12.270 08:23:45 -- accel/accel.sh@22 -- # case "$var" in 00:27:12.270 08:23:45 -- accel/accel.sh@20 -- # IFS=: 00:27:12.270 08:23:45 -- accel/accel.sh@20 -- # read -r var val 00:27:12.270 08:23:45 -- accel/accel.sh@21 -- # val= 00:27:12.270 08:23:45 -- accel/accel.sh@22 -- # case "$var" in 00:27:12.270 08:23:45 -- 
accel/accel.sh@20 -- # IFS=: 00:27:12.270 08:23:45 -- accel/accel.sh@20 -- # read -r var val 00:27:12.270 08:23:45 -- accel/accel.sh@21 -- # val= 00:27:12.270 08:23:45 -- accel/accel.sh@22 -- # case "$var" in 00:27:12.270 08:23:45 -- accel/accel.sh@20 -- # IFS=: 00:27:12.270 08:23:45 -- accel/accel.sh@20 -- # read -r var val 00:27:12.270 08:23:45 -- accel/accel.sh@21 -- # val= 00:27:12.270 08:23:45 -- accel/accel.sh@22 -- # case "$var" in 00:27:12.270 08:23:45 -- accel/accel.sh@20 -- # IFS=: 00:27:12.270 08:23:45 -- accel/accel.sh@20 -- # read -r var val 00:27:12.270 08:23:45 -- accel/accel.sh@21 -- # val= 00:27:12.270 08:23:45 -- accel/accel.sh@22 -- # case "$var" in 00:27:12.270 08:23:45 -- accel/accel.sh@20 -- # IFS=: 00:27:12.270 08:23:45 -- accel/accel.sh@20 -- # read -r var val 00:27:12.270 08:23:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:27:12.270 08:23:45 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:27:12.270 08:23:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:12.270 00:27:12.270 real 0m2.990s 00:27:12.270 user 0m2.593s 00:27:12.270 sys 0m0.198s 00:27:12.270 08:23:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:12.270 08:23:45 -- common/autotest_common.sh@10 -- # set +x 00:27:12.270 ************************************ 00:27:12.270 END TEST accel_dif_generate 00:27:12.270 ************************************ 00:27:12.270 08:23:45 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:27:12.270 08:23:45 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:27:12.270 08:23:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:12.270 08:23:45 -- common/autotest_common.sh@10 -- # set +x 00:27:12.270 ************************************ 00:27:12.270 START TEST accel_dif_generate_copy 00:27:12.270 ************************************ 00:27:12.270 08:23:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:27:12.270 08:23:45 -- accel/accel.sh@16 -- # local accel_opc 00:27:12.270 08:23:45 -- accel/accel.sh@17 -- # local accel_module 00:27:12.270 08:23:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:27:12.270 08:23:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:27:12.270 08:23:45 -- accel/accel.sh@12 -- # build_accel_config 00:27:12.270 08:23:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:27:12.270 08:23:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:27:12.270 08:23:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:27:12.270 08:23:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:27:12.270 08:23:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:27:12.270 08:23:45 -- accel/accel.sh@41 -- # local IFS=, 00:27:12.270 08:23:45 -- accel/accel.sh@42 -- # jq -r . 00:27:12.270 [2024-04-17 08:23:45.265336] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
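For reference, the xtrace above shows that every test in this block drives the same accel_perf binary, varying only the -w workload. A minimal way to re-run one by hand, assuming the repo layout from this log and that hugepages are already configured (e.g. via scripts/setup.sh); the in-tree accel.sh additionally feeds an accel JSON config on /dev/fd/62 through -c, which is presumably optional with the default software module and is omitted here:

# Editorial sketch; paths are taken verbatim from the xtrace above.
cd /home/vagrant/spdk_repo/spdk
sudo ./build/examples/accel_perf -t 1 -w dif_generate_copy   # 1-second run, software module by default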
00:27:12.270 [2024-04-17 08:23:45.265543] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59221 ] 00:27:12.270 [2024-04-17 08:23:45.406221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.270 [2024-04-17 08:23:45.512685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.647 08:23:46 -- accel/accel.sh@18 -- # out=' 00:27:13.647 SPDK Configuration: 00:27:13.647 Core mask: 0x1 00:27:13.647 00:27:13.647 Accel Perf Configuration: 00:27:13.647 Workload Type: dif_generate_copy 00:27:13.647 Vector size: 4096 bytes 00:27:13.647 Transfer size: 4096 bytes 00:27:13.647 Vector count 1 00:27:13.647 Module: software 00:27:13.647 Queue depth: 32 00:27:13.647 Allocate depth: 32 00:27:13.647 # threads/core: 1 00:27:13.647 Run time: 1 seconds 00:27:13.647 Verify: No 00:27:13.647 00:27:13.647 Running for 1 seconds... 00:27:13.647 00:27:13.647 Core,Thread Transfers Bandwidth Failed Miscompares 00:27:13.647 ------------------------------------------------------------------------------------ 00:27:13.647 0,0 94656/s 375 MiB/s 0 0 00:27:13.647 ==================================================================================== 00:27:13.647 Total 94656/s 369 MiB/s 0 0' 00:27:13.647 08:23:46 -- accel/accel.sh@20 -- # IFS=: 00:27:13.647 08:23:46 -- accel/accel.sh@20 -- # read -r var val 00:27:13.647 08:23:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:27:13.647 08:23:46 -- accel/accel.sh@12 -- # build_accel_config 00:27:13.647 08:23:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:27:13.647 08:23:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:27:13.647 08:23:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:27:13.647 08:23:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:27:13.647 08:23:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:27:13.647 08:23:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:27:13.647 08:23:46 -- accel/accel.sh@41 -- # local IFS=, 00:27:13.647 08:23:46 -- accel/accel.sh@42 -- # jq -r . 00:27:13.647 [2024-04-17 08:23:46.759816] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:27:13.648 [2024-04-17 08:23:46.759893] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59243 ] 00:27:13.648 [2024-04-17 08:23:46.900102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.906 [2024-04-17 08:23:47.007779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.906 08:23:47 -- accel/accel.sh@21 -- # val= 00:27:13.906 08:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # IFS=: 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # read -r var val 00:27:13.906 08:23:47 -- accel/accel.sh@21 -- # val= 00:27:13.906 08:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # IFS=: 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # read -r var val 00:27:13.906 08:23:47 -- accel/accel.sh@21 -- # val=0x1 00:27:13.906 08:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # IFS=: 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # read -r var val 00:27:13.906 08:23:47 -- accel/accel.sh@21 -- # val= 00:27:13.906 08:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # IFS=: 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # read -r var val 00:27:13.906 08:23:47 -- accel/accel.sh@21 -- # val= 00:27:13.906 08:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # IFS=: 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # read -r var val 00:27:13.906 08:23:47 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:27:13.906 08:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:27:13.906 08:23:47 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # IFS=: 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # read -r var val 00:27:13.906 08:23:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:27:13.906 08:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # IFS=: 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # read -r var val 00:27:13.906 08:23:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:27:13.906 08:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # IFS=: 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # read -r var val 00:27:13.906 08:23:47 -- accel/accel.sh@21 -- # val= 00:27:13.906 08:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # IFS=: 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # read -r var val 00:27:13.906 08:23:47 -- accel/accel.sh@21 -- # val=software 00:27:13.906 08:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:27:13.906 08:23:47 -- accel/accel.sh@23 -- # accel_module=software 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # IFS=: 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # read -r var val 00:27:13.906 08:23:47 -- accel/accel.sh@21 -- # val=32 00:27:13.906 08:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # IFS=: 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # read -r var val 00:27:13.906 08:23:47 -- accel/accel.sh@21 -- # val=32 00:27:13.906 08:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # IFS=: 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # read -r var val 00:27:13.906 08:23:47 -- accel/accel.sh@21 
-- # val=1 00:27:13.906 08:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # IFS=: 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # read -r var val 00:27:13.906 08:23:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:27:13.906 08:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # IFS=: 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # read -r var val 00:27:13.906 08:23:47 -- accel/accel.sh@21 -- # val=No 00:27:13.906 08:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # IFS=: 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # read -r var val 00:27:13.906 08:23:47 -- accel/accel.sh@21 -- # val= 00:27:13.906 08:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # IFS=: 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # read -r var val 00:27:13.906 08:23:47 -- accel/accel.sh@21 -- # val= 00:27:13.906 08:23:47 -- accel/accel.sh@22 -- # case "$var" in 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # IFS=: 00:27:13.906 08:23:47 -- accel/accel.sh@20 -- # read -r var val 00:27:15.297 08:23:48 -- accel/accel.sh@21 -- # val= 00:27:15.297 08:23:48 -- accel/accel.sh@22 -- # case "$var" in 00:27:15.297 08:23:48 -- accel/accel.sh@20 -- # IFS=: 00:27:15.297 08:23:48 -- accel/accel.sh@20 -- # read -r var val 00:27:15.297 08:23:48 -- accel/accel.sh@21 -- # val= 00:27:15.297 08:23:48 -- accel/accel.sh@22 -- # case "$var" in 00:27:15.297 08:23:48 -- accel/accel.sh@20 -- # IFS=: 00:27:15.297 08:23:48 -- accel/accel.sh@20 -- # read -r var val 00:27:15.297 08:23:48 -- accel/accel.sh@21 -- # val= 00:27:15.297 08:23:48 -- accel/accel.sh@22 -- # case "$var" in 00:27:15.297 08:23:48 -- accel/accel.sh@20 -- # IFS=: 00:27:15.297 08:23:48 -- accel/accel.sh@20 -- # read -r var val 00:27:15.297 08:23:48 -- accel/accel.sh@21 -- # val= 00:27:15.297 08:23:48 -- accel/accel.sh@22 -- # case "$var" in 00:27:15.297 08:23:48 -- accel/accel.sh@20 -- # IFS=: 00:27:15.297 08:23:48 -- accel/accel.sh@20 -- # read -r var val 00:27:15.297 08:23:48 -- accel/accel.sh@21 -- # val= 00:27:15.297 08:23:48 -- accel/accel.sh@22 -- # case "$var" in 00:27:15.297 08:23:48 -- accel/accel.sh@20 -- # IFS=: 00:27:15.297 08:23:48 -- accel/accel.sh@20 -- # read -r var val 00:27:15.297 08:23:48 -- accel/accel.sh@21 -- # val= 00:27:15.297 08:23:48 -- accel/accel.sh@22 -- # case "$var" in 00:27:15.297 08:23:48 -- accel/accel.sh@20 -- # IFS=: 00:27:15.297 08:23:48 -- accel/accel.sh@20 -- # read -r var val 00:27:15.297 08:23:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:27:15.297 08:23:48 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:27:15.297 08:23:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:15.297 00:27:15.297 real 0m2.997s 00:27:15.297 user 0m2.588s 00:27:15.297 sys 0m0.198s 00:27:15.297 08:23:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:15.297 08:23:48 -- common/autotest_common.sh@10 -- # set +x 00:27:15.297 ************************************ 00:27:15.297 END TEST accel_dif_generate_copy 00:27:15.297 ************************************ 00:27:15.297 08:23:48 -- accel/accel.sh@107 -- # [[ y == y ]] 00:27:15.297 08:23:48 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:27:15.297 08:23:48 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:27:15.297 08:23:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:15.297 08:23:48 -- 
common/autotest_common.sh@10 -- # set +x 00:27:15.297 ************************************ 00:27:15.297 START TEST accel_comp 00:27:15.297 ************************************ 00:27:15.297 08:23:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:27:15.297 08:23:48 -- accel/accel.sh@16 -- # local accel_opc 00:27:15.297 08:23:48 -- accel/accel.sh@17 -- # local accel_module 00:27:15.297 08:23:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:27:15.297 08:23:48 -- accel/accel.sh@12 -- # build_accel_config 00:27:15.297 08:23:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:27:15.297 08:23:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:27:15.297 08:23:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:27:15.297 08:23:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:27:15.297 08:23:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:27:15.297 08:23:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:27:15.297 08:23:48 -- accel/accel.sh@41 -- # local IFS=, 00:27:15.297 08:23:48 -- accel/accel.sh@42 -- # jq -r . 00:27:15.297 [2024-04-17 08:23:48.317785] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:15.297 [2024-04-17 08:23:48.317905] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59277 ] 00:27:15.297 [2024-04-17 08:23:48.457680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.297 [2024-04-17 08:23:48.570728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.674 08:23:49 -- accel/accel.sh@18 -- # out='Preparing input file... 00:27:16.674 00:27:16.674 SPDK Configuration: 00:27:16.674 Core mask: 0x1 00:27:16.674 00:27:16.674 Accel Perf Configuration: 00:27:16.674 Workload Type: compress 00:27:16.674 Transfer size: 4096 bytes 00:27:16.674 Vector count 1 00:27:16.674 Module: software 00:27:16.674 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:27:16.674 Queue depth: 32 00:27:16.674 Allocate depth: 32 00:27:16.674 # threads/core: 1 00:27:16.674 Run time: 1 seconds 00:27:16.674 Verify: No 00:27:16.674 00:27:16.674 Running for 1 seconds... 
00:27:16.674 00:27:16.674 Core,Thread Transfers Bandwidth Failed Miscompares 00:27:16.674 ------------------------------------------------------------------------------------ 00:27:16.674 0,0 40832/s 170 MiB/s 0 0 00:27:16.674 ==================================================================================== 00:27:16.674 Total 40832/s 159 MiB/s 0 0' 00:27:16.674 08:23:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:27:16.674 08:23:49 -- accel/accel.sh@20 -- # IFS=: 00:27:16.674 08:23:49 -- accel/accel.sh@20 -- # read -r var val 00:27:16.674 08:23:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:27:16.674 08:23:49 -- accel/accel.sh@12 -- # build_accel_config 00:27:16.674 08:23:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:27:16.674 08:23:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:27:16.674 08:23:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:27:16.674 08:23:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:27:16.674 08:23:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:27:16.674 08:23:49 -- accel/accel.sh@41 -- # local IFS=, 00:27:16.674 08:23:49 -- accel/accel.sh@42 -- # jq -r . 00:27:16.674 [2024-04-17 08:23:49.805934] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:16.674 [2024-04-17 08:23:49.806006] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59297 ] 00:27:16.674 [2024-04-17 08:23:49.943627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.934 [2024-04-17 08:23:50.047129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.934 08:23:50 -- accel/accel.sh@21 -- # val= 00:27:16.934 08:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # IFS=: 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # read -r var val 00:27:16.934 08:23:50 -- accel/accel.sh@21 -- # val= 00:27:16.934 08:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # IFS=: 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # read -r var val 00:27:16.934 08:23:50 -- accel/accel.sh@21 -- # val= 00:27:16.934 08:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # IFS=: 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # read -r var val 00:27:16.934 08:23:50 -- accel/accel.sh@21 -- # val=0x1 00:27:16.934 08:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # IFS=: 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # read -r var val 00:27:16.934 08:23:50 -- accel/accel.sh@21 -- # val= 00:27:16.934 08:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # IFS=: 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # read -r var val 00:27:16.934 08:23:50 -- accel/accel.sh@21 -- # val= 00:27:16.934 08:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # IFS=: 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # read -r var val 00:27:16.934 08:23:50 -- accel/accel.sh@21 -- # val=compress 00:27:16.934 08:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:27:16.934 08:23:50 -- accel/accel.sh@24 -- # accel_opc=compress 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # IFS=: 
00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # read -r var val 00:27:16.934 08:23:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:27:16.934 08:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # IFS=: 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # read -r var val 00:27:16.934 08:23:50 -- accel/accel.sh@21 -- # val= 00:27:16.934 08:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # IFS=: 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # read -r var val 00:27:16.934 08:23:50 -- accel/accel.sh@21 -- # val=software 00:27:16.934 08:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:27:16.934 08:23:50 -- accel/accel.sh@23 -- # accel_module=software 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # IFS=: 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # read -r var val 00:27:16.934 08:23:50 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:27:16.934 08:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # IFS=: 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # read -r var val 00:27:16.934 08:23:50 -- accel/accel.sh@21 -- # val=32 00:27:16.934 08:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # IFS=: 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # read -r var val 00:27:16.934 08:23:50 -- accel/accel.sh@21 -- # val=32 00:27:16.934 08:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # IFS=: 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # read -r var val 00:27:16.934 08:23:50 -- accel/accel.sh@21 -- # val=1 00:27:16.934 08:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # IFS=: 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # read -r var val 00:27:16.934 08:23:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:27:16.934 08:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # IFS=: 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # read -r var val 00:27:16.934 08:23:50 -- accel/accel.sh@21 -- # val=No 00:27:16.934 08:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # IFS=: 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # read -r var val 00:27:16.934 08:23:50 -- accel/accel.sh@21 -- # val= 00:27:16.934 08:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # IFS=: 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # read -r var val 00:27:16.934 08:23:50 -- accel/accel.sh@21 -- # val= 00:27:16.934 08:23:50 -- accel/accel.sh@22 -- # case "$var" in 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # IFS=: 00:27:16.934 08:23:50 -- accel/accel.sh@20 -- # read -r var val 00:27:18.329 08:23:51 -- accel/accel.sh@21 -- # val= 00:27:18.329 08:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:27:18.329 08:23:51 -- accel/accel.sh@20 -- # IFS=: 00:27:18.329 08:23:51 -- accel/accel.sh@20 -- # read -r var val 00:27:18.329 08:23:51 -- accel/accel.sh@21 -- # val= 00:27:18.329 08:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:27:18.329 08:23:51 -- accel/accel.sh@20 -- # IFS=: 00:27:18.329 08:23:51 -- accel/accel.sh@20 -- # read -r var val 00:27:18.329 08:23:51 -- accel/accel.sh@21 -- # val= 00:27:18.329 08:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:27:18.329 08:23:51 -- accel/accel.sh@20 -- # IFS=: 00:27:18.329 08:23:51 -- accel/accel.sh@20 -- # read -r var val 00:27:18.329 08:23:51 -- accel/accel.sh@21 -- # val= 
00:27:18.329 08:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:27:18.329 08:23:51 -- accel/accel.sh@20 -- # IFS=: 00:27:18.329 08:23:51 -- accel/accel.sh@20 -- # read -r var val 00:27:18.329 08:23:51 -- accel/accel.sh@21 -- # val= 00:27:18.329 08:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:27:18.329 08:23:51 -- accel/accel.sh@20 -- # IFS=: 00:27:18.329 08:23:51 -- accel/accel.sh@20 -- # read -r var val 00:27:18.329 08:23:51 -- accel/accel.sh@21 -- # val= 00:27:18.329 08:23:51 -- accel/accel.sh@22 -- # case "$var" in 00:27:18.329 08:23:51 -- accel/accel.sh@20 -- # IFS=: 00:27:18.329 08:23:51 -- accel/accel.sh@20 -- # read -r var val 00:27:18.329 08:23:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:27:18.329 08:23:51 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:27:18.329 08:23:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:18.329 00:27:18.329 real 0m2.981s 00:27:18.329 user 0m2.588s 00:27:18.329 sys 0m0.199s 00:27:18.329 08:23:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:18.329 08:23:51 -- common/autotest_common.sh@10 -- # set +x 00:27:18.329 ************************************ 00:27:18.329 END TEST accel_comp 00:27:18.329 ************************************ 00:27:18.329 08:23:51 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:27:18.329 08:23:51 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:27:18.329 08:23:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:18.329 08:23:51 -- common/autotest_common.sh@10 -- # set +x 00:27:18.329 ************************************ 00:27:18.329 START TEST accel_decomp 00:27:18.329 ************************************ 00:27:18.329 08:23:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:27:18.329 08:23:51 -- accel/accel.sh@16 -- # local accel_opc 00:27:18.329 08:23:51 -- accel/accel.sh@17 -- # local accel_module 00:27:18.329 08:23:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:27:18.329 08:23:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:27:18.329 08:23:51 -- accel/accel.sh@12 -- # build_accel_config 00:27:18.329 08:23:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:27:18.329 08:23:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:27:18.329 08:23:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:27:18.329 08:23:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:27:18.329 08:23:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:27:18.329 08:23:51 -- accel/accel.sh@41 -- # local IFS=, 00:27:18.329 08:23:51 -- accel/accel.sh@42 -- # jq -r . 00:27:18.329 [2024-04-17 08:23:51.358795] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:18.329 [2024-04-17 08:23:51.358984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59331 ] 00:27:18.329 [2024-04-17 08:23:51.498173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.329 [2024-04-17 08:23:51.602509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.701 08:23:52 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:27:19.701 00:27:19.701 SPDK Configuration: 00:27:19.701 Core mask: 0x1 00:27:19.701 00:27:19.701 Accel Perf Configuration: 00:27:19.701 Workload Type: decompress 00:27:19.701 Transfer size: 4096 bytes 00:27:19.701 Vector count 1 00:27:19.702 Module: software 00:27:19.702 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:27:19.702 Queue depth: 32 00:27:19.702 Allocate depth: 32 00:27:19.702 # threads/core: 1 00:27:19.702 Run time: 1 seconds 00:27:19.702 Verify: Yes 00:27:19.702 00:27:19.702 Running for 1 seconds... 00:27:19.702 00:27:19.702 Core,Thread Transfers Bandwidth Failed Miscompares 00:27:19.702 ------------------------------------------------------------------------------------ 00:27:19.702 0,0 58240/s 107 MiB/s 0 0 00:27:19.702 ==================================================================================== 00:27:19.702 Total 58240/s 227 MiB/s 0 0' 00:27:19.702 08:23:52 -- accel/accel.sh@20 -- # IFS=: 00:27:19.702 08:23:52 -- accel/accel.sh@20 -- # read -r var val 00:27:19.702 08:23:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:27:19.702 08:23:52 -- accel/accel.sh@12 -- # build_accel_config 00:27:19.702 08:23:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:27:19.702 08:23:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:27:19.702 08:23:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:27:19.702 08:23:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:27:19.702 08:23:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:27:19.702 08:23:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:27:19.702 08:23:52 -- accel/accel.sh@41 -- # local IFS=, 00:27:19.702 08:23:52 -- accel/accel.sh@42 -- # jq -r . 00:27:19.702 [2024-04-17 08:23:52.844414] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
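The compress and decompress sections differ only in the workload name, the -l input file, and the -y flag, which is why the compress run reports "Verify: No" while decompress reports "Verify: Yes". A re-run sketch under the same assumptions as above (repo layout and hugepage setup from this log):

# Editorial sketch; flags are exactly those visible in the run_test lines above.
cd /home/vagrant/spdk_repo/spdk
sudo ./build/examples/accel_perf -t 1 -w compress   -l test/accel/bib        # Verify: No
sudo ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y     # Verify: Yes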
00:27:19.702 [2024-04-17 08:23:52.844552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59351 ] 00:27:19.702 [2024-04-17 08:23:52.974355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.963 [2024-04-17 08:23:53.074482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.963 08:23:53 -- accel/accel.sh@21 -- # val= 00:27:19.963 08:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # IFS=: 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # read -r var val 00:27:19.963 08:23:53 -- accel/accel.sh@21 -- # val= 00:27:19.963 08:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # IFS=: 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # read -r var val 00:27:19.963 08:23:53 -- accel/accel.sh@21 -- # val= 00:27:19.963 08:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # IFS=: 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # read -r var val 00:27:19.963 08:23:53 -- accel/accel.sh@21 -- # val=0x1 00:27:19.963 08:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # IFS=: 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # read -r var val 00:27:19.963 08:23:53 -- accel/accel.sh@21 -- # val= 00:27:19.963 08:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # IFS=: 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # read -r var val 00:27:19.963 08:23:53 -- accel/accel.sh@21 -- # val= 00:27:19.963 08:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # IFS=: 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # read -r var val 00:27:19.963 08:23:53 -- accel/accel.sh@21 -- # val=decompress 00:27:19.963 08:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:27:19.963 08:23:53 -- accel/accel.sh@24 -- # accel_opc=decompress 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # IFS=: 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # read -r var val 00:27:19.963 08:23:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:27:19.963 08:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # IFS=: 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # read -r var val 00:27:19.963 08:23:53 -- accel/accel.sh@21 -- # val= 00:27:19.963 08:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # IFS=: 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # read -r var val 00:27:19.963 08:23:53 -- accel/accel.sh@21 -- # val=software 00:27:19.963 08:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:27:19.963 08:23:53 -- accel/accel.sh@23 -- # accel_module=software 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # IFS=: 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # read -r var val 00:27:19.963 08:23:53 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:27:19.963 08:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # IFS=: 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # read -r var val 00:27:19.963 08:23:53 -- accel/accel.sh@21 -- # val=32 00:27:19.963 08:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # IFS=: 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # read -r var val 00:27:19.963 08:23:53 -- 
accel/accel.sh@21 -- # val=32 00:27:19.963 08:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # IFS=: 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # read -r var val 00:27:19.963 08:23:53 -- accel/accel.sh@21 -- # val=1 00:27:19.963 08:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # IFS=: 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # read -r var val 00:27:19.963 08:23:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:27:19.963 08:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # IFS=: 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # read -r var val 00:27:19.963 08:23:53 -- accel/accel.sh@21 -- # val=Yes 00:27:19.963 08:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # IFS=: 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # read -r var val 00:27:19.963 08:23:53 -- accel/accel.sh@21 -- # val= 00:27:19.963 08:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # IFS=: 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # read -r var val 00:27:19.963 08:23:53 -- accel/accel.sh@21 -- # val= 00:27:19.963 08:23:53 -- accel/accel.sh@22 -- # case "$var" in 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # IFS=: 00:27:19.963 08:23:53 -- accel/accel.sh@20 -- # read -r var val 00:27:21.343 08:23:54 -- accel/accel.sh@21 -- # val= 00:27:21.343 08:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:27:21.343 08:23:54 -- accel/accel.sh@20 -- # IFS=: 00:27:21.343 08:23:54 -- accel/accel.sh@20 -- # read -r var val 00:27:21.343 08:23:54 -- accel/accel.sh@21 -- # val= 00:27:21.343 08:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:27:21.343 08:23:54 -- accel/accel.sh@20 -- # IFS=: 00:27:21.343 08:23:54 -- accel/accel.sh@20 -- # read -r var val 00:27:21.343 08:23:54 -- accel/accel.sh@21 -- # val= 00:27:21.343 08:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:27:21.343 08:23:54 -- accel/accel.sh@20 -- # IFS=: 00:27:21.343 08:23:54 -- accel/accel.sh@20 -- # read -r var val 00:27:21.343 08:23:54 -- accel/accel.sh@21 -- # val= 00:27:21.343 08:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:27:21.343 08:23:54 -- accel/accel.sh@20 -- # IFS=: 00:27:21.343 08:23:54 -- accel/accel.sh@20 -- # read -r var val 00:27:21.343 08:23:54 -- accel/accel.sh@21 -- # val= 00:27:21.343 08:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:27:21.343 08:23:54 -- accel/accel.sh@20 -- # IFS=: 00:27:21.343 08:23:54 -- accel/accel.sh@20 -- # read -r var val 00:27:21.343 08:23:54 -- accel/accel.sh@21 -- # val= 00:27:21.343 08:23:54 -- accel/accel.sh@22 -- # case "$var" in 00:27:21.343 08:23:54 -- accel/accel.sh@20 -- # IFS=: 00:27:21.343 08:23:54 -- accel/accel.sh@20 -- # read -r var val 00:27:21.343 08:23:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:27:21.343 08:23:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:27:21.343 08:23:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:21.343 00:27:21.343 real 0m2.967s 00:27:21.343 user 0m2.579s 00:27:21.343 sys 0m0.189s 00:27:21.343 08:23:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:21.343 08:23:54 -- common/autotest_common.sh@10 -- # set +x 00:27:21.343 ************************************ 00:27:21.343 END TEST accel_decomp 00:27:21.343 ************************************ 00:27:21.343 08:23:54 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:27:21.343 08:23:54 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:27:21.343 08:23:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:21.343 08:23:54 -- common/autotest_common.sh@10 -- # set +x 00:27:21.343 ************************************ 00:27:21.343 START TEST accel_decmop_full 00:27:21.343 ************************************ 00:27:21.343 08:23:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:27:21.343 08:23:54 -- accel/accel.sh@16 -- # local accel_opc 00:27:21.343 08:23:54 -- accel/accel.sh@17 -- # local accel_module 00:27:21.343 08:23:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:27:21.343 08:23:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:27:21.343 08:23:54 -- accel/accel.sh@12 -- # build_accel_config 00:27:21.343 08:23:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:27:21.343 08:23:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:27:21.343 08:23:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:27:21.343 08:23:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:27:21.343 08:23:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:27:21.343 08:23:54 -- accel/accel.sh@41 -- # local IFS=, 00:27:21.343 08:23:54 -- accel/accel.sh@42 -- # jq -r . 00:27:21.343 [2024-04-17 08:23:54.384534] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:21.343 [2024-04-17 08:23:54.384719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59384 ] 00:27:21.343 [2024-04-17 08:23:54.523322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.343 [2024-04-17 08:23:54.625876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.731 08:23:55 -- accel/accel.sh@18 -- # out='Preparing input file... 00:27:22.731 00:27:22.731 SPDK Configuration: 00:27:22.731 Core mask: 0x1 00:27:22.731 00:27:22.731 Accel Perf Configuration: 00:27:22.731 Workload Type: decompress 00:27:22.731 Transfer size: 111250 bytes 00:27:22.731 Vector count 1 00:27:22.731 Module: software 00:27:22.731 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:27:22.731 Queue depth: 32 00:27:22.731 Allocate depth: 32 00:27:22.731 # threads/core: 1 00:27:22.731 Run time: 1 seconds 00:27:22.731 Verify: Yes 00:27:22.731 00:27:22.731 Running for 1 seconds... 
00:27:22.731 00:27:22.731 Core,Thread Transfers Bandwidth Failed Miscompares 00:27:22.731 ------------------------------------------------------------------------------------ 00:27:22.731 0,0 3648/s 150 MiB/s 0 0 00:27:22.731 ==================================================================================== 00:27:22.731 Total 3648/s 387 MiB/s 0 0' 00:27:22.731 08:23:55 -- accel/accel.sh@20 -- # IFS=: 00:27:22.731 08:23:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:27:22.731 08:23:55 -- accel/accel.sh@20 -- # read -r var val 00:27:22.731 08:23:55 -- accel/accel.sh@12 -- # build_accel_config 00:27:22.731 08:23:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:27:22.731 08:23:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:27:22.731 08:23:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:27:22.731 08:23:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:27:22.731 08:23:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:27:22.731 08:23:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:27:22.731 08:23:55 -- accel/accel.sh@41 -- # local IFS=, 00:27:22.731 08:23:55 -- accel/accel.sh@42 -- # jq -r . 00:27:22.731 [2024-04-17 08:23:55.882221] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:22.732 [2024-04-17 08:23:55.882301] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59399 ] 00:27:22.732 [2024-04-17 08:23:56.020997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.990 [2024-04-17 08:23:56.122935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.990 08:23:56 -- accel/accel.sh@21 -- # val= 00:27:22.990 08:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:27:22.990 08:23:56 -- accel/accel.sh@20 -- # IFS=: 00:27:22.990 08:23:56 -- accel/accel.sh@20 -- # read -r var val 00:27:22.990 08:23:56 -- accel/accel.sh@21 -- # val= 00:27:22.990 08:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:27:22.990 08:23:56 -- accel/accel.sh@20 -- # IFS=: 00:27:22.990 08:23:56 -- accel/accel.sh@20 -- # read -r var val 00:27:22.990 08:23:56 -- accel/accel.sh@21 -- # val= 00:27:22.990 08:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:27:22.990 08:23:56 -- accel/accel.sh@20 -- # IFS=: 00:27:22.990 08:23:56 -- accel/accel.sh@20 -- # read -r var val 00:27:22.990 08:23:56 -- accel/accel.sh@21 -- # val=0x1 00:27:22.991 08:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # IFS=: 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # read -r var val 00:27:22.991 08:23:56 -- accel/accel.sh@21 -- # val= 00:27:22.991 08:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # IFS=: 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # read -r var val 00:27:22.991 08:23:56 -- accel/accel.sh@21 -- # val= 00:27:22.991 08:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # IFS=: 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # read -r var val 00:27:22.991 08:23:56 -- accel/accel.sh@21 -- # val=decompress 00:27:22.991 08:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:27:22.991 08:23:56 -- accel/accel.sh@24 -- # accel_opc=decompress 00:27:22.991 08:23:56 -- accel/accel.sh@20 
-- # IFS=: 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # read -r var val 00:27:22.991 08:23:56 -- accel/accel.sh@21 -- # val='111250 bytes' 00:27:22.991 08:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # IFS=: 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # read -r var val 00:27:22.991 08:23:56 -- accel/accel.sh@21 -- # val= 00:27:22.991 08:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # IFS=: 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # read -r var val 00:27:22.991 08:23:56 -- accel/accel.sh@21 -- # val=software 00:27:22.991 08:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:27:22.991 08:23:56 -- accel/accel.sh@23 -- # accel_module=software 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # IFS=: 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # read -r var val 00:27:22.991 08:23:56 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:27:22.991 08:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # IFS=: 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # read -r var val 00:27:22.991 08:23:56 -- accel/accel.sh@21 -- # val=32 00:27:22.991 08:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # IFS=: 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # read -r var val 00:27:22.991 08:23:56 -- accel/accel.sh@21 -- # val=32 00:27:22.991 08:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # IFS=: 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # read -r var val 00:27:22.991 08:23:56 -- accel/accel.sh@21 -- # val=1 00:27:22.991 08:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # IFS=: 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # read -r var val 00:27:22.991 08:23:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:27:22.991 08:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # IFS=: 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # read -r var val 00:27:22.991 08:23:56 -- accel/accel.sh@21 -- # val=Yes 00:27:22.991 08:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # IFS=: 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # read -r var val 00:27:22.991 08:23:56 -- accel/accel.sh@21 -- # val= 00:27:22.991 08:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # IFS=: 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # read -r var val 00:27:22.991 08:23:56 -- accel/accel.sh@21 -- # val= 00:27:22.991 08:23:56 -- accel/accel.sh@22 -- # case "$var" in 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # IFS=: 00:27:22.991 08:23:56 -- accel/accel.sh@20 -- # read -r var val 00:27:24.380 08:23:57 -- accel/accel.sh@21 -- # val= 00:27:24.380 08:23:57 -- accel/accel.sh@22 -- # case "$var" in 00:27:24.380 08:23:57 -- accel/accel.sh@20 -- # IFS=: 00:27:24.380 08:23:57 -- accel/accel.sh@20 -- # read -r var val 00:27:24.380 08:23:57 -- accel/accel.sh@21 -- # val= 00:27:24.380 08:23:57 -- accel/accel.sh@22 -- # case "$var" in 00:27:24.380 08:23:57 -- accel/accel.sh@20 -- # IFS=: 00:27:24.380 08:23:57 -- accel/accel.sh@20 -- # read -r var val 00:27:24.380 08:23:57 -- accel/accel.sh@21 -- # val= 00:27:24.380 08:23:57 -- accel/accel.sh@22 -- # case "$var" in 00:27:24.380 08:23:57 -- accel/accel.sh@20 -- # IFS=: 00:27:24.380 08:23:57 -- accel/accel.sh@20 -- # read -r var val 00:27:24.380 08:23:57 -- accel/accel.sh@21 -- # 
val= 00:27:24.380 08:23:57 -- accel/accel.sh@22 -- # case "$var" in 00:27:24.380 08:23:57 -- accel/accel.sh@20 -- # IFS=: 00:27:24.380 08:23:57 -- accel/accel.sh@20 -- # read -r var val 00:27:24.380 08:23:57 -- accel/accel.sh@21 -- # val= 00:27:24.380 08:23:57 -- accel/accel.sh@22 -- # case "$var" in 00:27:24.380 08:23:57 -- accel/accel.sh@20 -- # IFS=: 00:27:24.380 08:23:57 -- accel/accel.sh@20 -- # read -r var val 00:27:24.380 08:23:57 -- accel/accel.sh@21 -- # val= 00:27:24.380 08:23:57 -- accel/accel.sh@22 -- # case "$var" in 00:27:24.381 08:23:57 -- accel/accel.sh@20 -- # IFS=: 00:27:24.381 08:23:57 -- accel/accel.sh@20 -- # read -r var val 00:27:24.381 08:23:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:27:24.381 08:23:57 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:27:24.381 08:23:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:24.381 00:27:24.381 real 0m2.998s 00:27:24.381 user 0m2.617s 00:27:24.381 sys 0m0.180s 00:27:24.381 08:23:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:24.381 08:23:57 -- common/autotest_common.sh@10 -- # set +x 00:27:24.381 ************************************ 00:27:24.381 END TEST accel_decmop_full 00:27:24.381 ************************************ 00:27:24.381 08:23:57 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:27:24.381 08:23:57 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:27:24.381 08:23:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:24.381 08:23:57 -- common/autotest_common.sh@10 -- # set +x 00:27:24.381 ************************************ 00:27:24.381 START TEST accel_decomp_mcore 00:27:24.381 ************************************ 00:27:24.381 08:23:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:27:24.381 08:23:57 -- accel/accel.sh@16 -- # local accel_opc 00:27:24.381 08:23:57 -- accel/accel.sh@17 -- # local accel_module 00:27:24.381 08:23:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:27:24.381 08:23:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:27:24.381 08:23:57 -- accel/accel.sh@12 -- # build_accel_config 00:27:24.381 08:23:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:27:24.381 08:23:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:27:24.381 08:23:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:27:24.381 08:23:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:27:24.381 08:23:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:27:24.381 08:23:57 -- accel/accel.sh@41 -- # local IFS=, 00:27:24.381 08:23:57 -- accel/accel.sh@42 -- # jq -r . 00:27:24.381 [2024-04-17 08:23:57.441913] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
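The accel_decmop_full variant just finished was launched with an extra -o 0, and its config dump shows the transfer size jumping from 4096 to 111250 bytes; its Total row still checks out against the same transfers-times-size arithmetic:

# Editorial sanity check on the -o 0 run above (111250-byte transfers).
awk 'BEGIN { printf "%.1f MiB/s\n", 3648 * 111250 / 1048576 }'   # 387.0, matching "Total 3648/s 387 MiB/s"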
00:27:24.381 [2024-04-17 08:23:57.442065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59434 ] 00:27:24.381 [2024-04-17 08:23:57.580386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:24.381 [2024-04-17 08:23:57.683724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.381 [2024-04-17 08:23:57.683772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:24.381 [2024-04-17 08:23:57.683834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:24.381 [2024-04-17 08:23:57.683839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.757 08:23:58 -- accel/accel.sh@18 -- # out='Preparing input file... 00:27:25.757 00:27:25.757 SPDK Configuration: 00:27:25.757 Core mask: 0xf 00:27:25.757 00:27:25.757 Accel Perf Configuration: 00:27:25.757 Workload Type: decompress 00:27:25.757 Transfer size: 4096 bytes 00:27:25.758 Vector count 1 00:27:25.758 Module: software 00:27:25.758 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:27:25.758 Queue depth: 32 00:27:25.758 Allocate depth: 32 00:27:25.758 # threads/core: 1 00:27:25.758 Run time: 1 seconds 00:27:25.758 Verify: Yes 00:27:25.758 00:27:25.758 Running for 1 seconds... 00:27:25.758 00:27:25.758 Core,Thread Transfers Bandwidth Failed Miscompares 00:27:25.758 ------------------------------------------------------------------------------------ 00:27:25.758 0,0 46016/s 84 MiB/s 0 0 00:27:25.758 3,0 52640/s 97 MiB/s 0 0 00:27:25.758 2,0 52064/s 95 MiB/s 0 0 00:27:25.758 1,0 52736/s 97 MiB/s 0 0 00:27:25.758 ==================================================================================== 00:27:25.758 Total 203456/s 794 MiB/s 0 0' 00:27:25.758 08:23:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:27:25.758 08:23:58 -- accel/accel.sh@20 -- # IFS=: 00:27:25.758 08:23:58 -- accel/accel.sh@20 -- # read -r var val 00:27:25.758 08:23:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:27:25.758 08:23:58 -- accel/accel.sh@12 -- # build_accel_config 00:27:25.758 08:23:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:27:25.758 08:23:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:27:25.758 08:23:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:27:25.758 08:23:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:27:25.758 08:23:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:27:25.758 08:23:58 -- accel/accel.sh@41 -- # local IFS=, 00:27:25.758 08:23:58 -- accel/accel.sh@42 -- # jq -r . 00:27:25.758 [2024-04-17 08:23:58.930086] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:27:25.758 [2024-04-17 08:23:58.930221] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59456 ] 00:27:25.758 [2024-04-17 08:23:59.077358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:26.017 [2024-04-17 08:23:59.185441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.018 [2024-04-17 08:23:59.185249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.018 [2024-04-17 08:23:59.185307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:26.018 [2024-04-17 08:23:59.185442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:26.018 08:23:59 -- accel/accel.sh@21 -- # val= 00:27:26.018 08:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # IFS=: 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # read -r var val 00:27:26.018 08:23:59 -- accel/accel.sh@21 -- # val= 00:27:26.018 08:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # IFS=: 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # read -r var val 00:27:26.018 08:23:59 -- accel/accel.sh@21 -- # val= 00:27:26.018 08:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # IFS=: 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # read -r var val 00:27:26.018 08:23:59 -- accel/accel.sh@21 -- # val=0xf 00:27:26.018 08:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # IFS=: 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # read -r var val 00:27:26.018 08:23:59 -- accel/accel.sh@21 -- # val= 00:27:26.018 08:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # IFS=: 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # read -r var val 00:27:26.018 08:23:59 -- accel/accel.sh@21 -- # val= 00:27:26.018 08:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # IFS=: 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # read -r var val 00:27:26.018 08:23:59 -- accel/accel.sh@21 -- # val=decompress 00:27:26.018 08:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:27:26.018 08:23:59 -- accel/accel.sh@24 -- # accel_opc=decompress 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # IFS=: 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # read -r var val 00:27:26.018 08:23:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:27:26.018 08:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # IFS=: 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # read -r var val 00:27:26.018 08:23:59 -- accel/accel.sh@21 -- # val= 00:27:26.018 08:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # IFS=: 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # read -r var val 00:27:26.018 08:23:59 -- accel/accel.sh@21 -- # val=software 00:27:26.018 08:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:27:26.018 08:23:59 -- accel/accel.sh@23 -- # accel_module=software 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # IFS=: 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # read -r var val 00:27:26.018 08:23:59 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:27:26.018 08:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # IFS=: 
00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # read -r var val 00:27:26.018 08:23:59 -- accel/accel.sh@21 -- # val=32 00:27:26.018 08:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # IFS=: 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # read -r var val 00:27:26.018 08:23:59 -- accel/accel.sh@21 -- # val=32 00:27:26.018 08:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # IFS=: 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # read -r var val 00:27:26.018 08:23:59 -- accel/accel.sh@21 -- # val=1 00:27:26.018 08:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # IFS=: 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # read -r var val 00:27:26.018 08:23:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:27:26.018 08:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # IFS=: 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # read -r var val 00:27:26.018 08:23:59 -- accel/accel.sh@21 -- # val=Yes 00:27:26.018 08:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # IFS=: 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # read -r var val 00:27:26.018 08:23:59 -- accel/accel.sh@21 -- # val= 00:27:26.018 08:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # IFS=: 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # read -r var val 00:27:26.018 08:23:59 -- accel/accel.sh@21 -- # val= 00:27:26.018 08:23:59 -- accel/accel.sh@22 -- # case "$var" in 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # IFS=: 00:27:26.018 08:23:59 -- accel/accel.sh@20 -- # read -r var val 00:27:27.427 08:24:00 -- accel/accel.sh@21 -- # val= 00:27:27.427 08:24:00 -- accel/accel.sh@22 -- # case "$var" in 00:27:27.427 08:24:00 -- accel/accel.sh@20 -- # IFS=: 00:27:27.427 08:24:00 -- accel/accel.sh@20 -- # read -r var val 00:27:27.427 08:24:00 -- accel/accel.sh@21 -- # val= 00:27:27.427 08:24:00 -- accel/accel.sh@22 -- # case "$var" in 00:27:27.427 08:24:00 -- accel/accel.sh@20 -- # IFS=: 00:27:27.427 08:24:00 -- accel/accel.sh@20 -- # read -r var val 00:27:27.427 08:24:00 -- accel/accel.sh@21 -- # val= 00:27:27.427 08:24:00 -- accel/accel.sh@22 -- # case "$var" in 00:27:27.427 08:24:00 -- accel/accel.sh@20 -- # IFS=: 00:27:27.427 08:24:00 -- accel/accel.sh@20 -- # read -r var val 00:27:27.427 08:24:00 -- accel/accel.sh@21 -- # val= 00:27:27.427 08:24:00 -- accel/accel.sh@22 -- # case "$var" in 00:27:27.427 08:24:00 -- accel/accel.sh@20 -- # IFS=: 00:27:27.427 08:24:00 -- accel/accel.sh@20 -- # read -r var val 00:27:27.427 08:24:00 -- accel/accel.sh@21 -- # val= 00:27:27.427 08:24:00 -- accel/accel.sh@22 -- # case "$var" in 00:27:27.427 08:24:00 -- accel/accel.sh@20 -- # IFS=: 00:27:27.427 08:24:00 -- accel/accel.sh@20 -- # read -r var val 00:27:27.427 08:24:00 -- accel/accel.sh@21 -- # val= 00:27:27.427 08:24:00 -- accel/accel.sh@22 -- # case "$var" in 00:27:27.427 08:24:00 -- accel/accel.sh@20 -- # IFS=: 00:27:27.427 08:24:00 -- accel/accel.sh@20 -- # read -r var val 00:27:27.427 08:24:00 -- accel/accel.sh@21 -- # val= 00:27:27.427 08:24:00 -- accel/accel.sh@22 -- # case "$var" in 00:27:27.427 08:24:00 -- accel/accel.sh@20 -- # IFS=: 00:27:27.427 08:24:00 -- accel/accel.sh@20 -- # read -r var val 00:27:27.427 08:24:00 -- accel/accel.sh@21 -- # val= 00:27:27.427 08:24:00 -- accel/accel.sh@22 -- # case "$var" in 00:27:27.427 08:24:00 -- accel/accel.sh@20 -- # IFS=: 00:27:27.427 08:24:00 -- 
accel/accel.sh@20 -- # read -r var val 00:27:27.427 08:24:00 -- accel/accel.sh@21 -- # val= 00:27:27.427 08:24:00 -- accel/accel.sh@22 -- # case "$var" in 00:27:27.427 08:24:00 -- accel/accel.sh@20 -- # IFS=: 00:27:27.427 08:24:00 -- accel/accel.sh@20 -- # read -r var val 00:27:27.427 08:24:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:27:27.427 08:24:00 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:27:27.427 08:24:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:27.427 ************************************ 00:27:27.427 END TEST accel_decomp_mcore 00:27:27.427 ************************************ 00:27:27.427 00:27:27.427 real 0m3.008s 00:27:27.427 user 0m9.252s 00:27:27.427 sys 0m0.227s 00:27:27.427 08:24:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:27.427 08:24:00 -- common/autotest_common.sh@10 -- # set +x 00:27:27.427 08:24:00 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:27:27.427 08:24:00 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:27:27.427 08:24:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:27.427 08:24:00 -- common/autotest_common.sh@10 -- # set +x 00:27:27.427 ************************************ 00:27:27.427 START TEST accel_decomp_full_mcore 00:27:27.427 ************************************ 00:27:27.427 08:24:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:27:27.427 08:24:00 -- accel/accel.sh@16 -- # local accel_opc 00:27:27.427 08:24:00 -- accel/accel.sh@17 -- # local accel_module 00:27:27.427 08:24:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:27:27.427 08:24:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:27:27.427 08:24:00 -- accel/accel.sh@12 -- # build_accel_config 00:27:27.427 08:24:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:27:27.427 08:24:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:27:27.427 08:24:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:27:27.427 08:24:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:27:27.427 08:24:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:27:27.427 08:24:00 -- accel/accel.sh@41 -- # local IFS=, 00:27:27.427 08:24:00 -- accel/accel.sh@42 -- # jq -r . 00:27:27.427 [2024-04-17 08:24:00.491714] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:27.427 [2024-04-17 08:24:00.491944] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59494 ] 00:27:27.427 [2024-04-17 08:24:00.636885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:27.427 [2024-04-17 08:24:00.746478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.427 [2024-04-17 08:24:00.746669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:27.427 [2024-04-17 08:24:00.746734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.427 [2024-04-17 08:24:00.746730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:28.806 08:24:01 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:27:28.806 00:27:28.806 SPDK Configuration: 00:27:28.806 Core mask: 0xf 00:27:28.806 00:27:28.806 Accel Perf Configuration: 00:27:28.806 Workload Type: decompress 00:27:28.806 Transfer size: 111250 bytes 00:27:28.806 Vector count 1 00:27:28.806 Module: software 00:27:28.806 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:27:28.806 Queue depth: 32 00:27:28.806 Allocate depth: 32 00:27:28.806 # threads/core: 1 00:27:28.806 Run time: 1 seconds 00:27:28.806 Verify: Yes 00:27:28.806 00:27:28.806 Running for 1 seconds... 00:27:28.806 00:27:28.806 Core,Thread Transfers Bandwidth Failed Miscompares 00:27:28.806 ------------------------------------------------------------------------------------ 00:27:28.806 0,0 3328/s 137 MiB/s 0 0 00:27:28.806 3,0 3840/s 158 MiB/s 0 0 00:27:28.806 2,0 3872/s 159 MiB/s 0 0 00:27:28.806 1,0 3712/s 153 MiB/s 0 0 00:27:28.806 ==================================================================================== 00:27:28.806 Total 14752/s 1565 MiB/s 0 0' 00:27:28.806 08:24:01 -- accel/accel.sh@20 -- # IFS=: 00:27:28.806 08:24:01 -- accel/accel.sh@20 -- # read -r var val 00:27:28.806 08:24:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:27:28.806 08:24:01 -- accel/accel.sh@12 -- # build_accel_config 00:27:28.806 08:24:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:27:28.806 08:24:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:27:28.806 08:24:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:27:28.806 08:24:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:27:28.806 08:24:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:27:28.806 08:24:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:27:28.806 08:24:01 -- accel/accel.sh@41 -- # local IFS=, 00:27:28.806 08:24:01 -- accel/accel.sh@42 -- # jq -r . 00:27:28.806 [2024-04-17 08:24:02.020070] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
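The pass below repeats accel_decomp_full_mcore with the RPC-driven setup. Relative to the 4096-byte mcore test, the distinguishing flag is -o 0, which in these runs corresponds to the full 111250-byte transfers reported in the configuration block above. A sketch under the same path assumptions as before:

    # -o 0 selects the full-buffer transfers (111250 bytes per the reported config)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -y -m 0xf -o 0 \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib

Note the trade-off visible in the two result tables: total operations drop from 203456/s to 14752/s while aggregate bandwidth roughly doubles from 794 MiB/s to 1565 MiB/s, as expected for much larger transfers.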
00:27:28.806 [2024-04-17 08:24:02.020245] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59516 ] 00:27:29.066 [2024-04-17 08:24:02.158009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:29.066 [2024-04-17 08:24:02.264740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.066 [2024-04-17 08:24:02.264937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:29.066 [2024-04-17 08:24:02.265024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.066 [2024-04-17 08:24:02.265028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:29.066 08:24:02 -- accel/accel.sh@21 -- # val= 00:27:29.066 08:24:02 -- accel/accel.sh@22 -- # case "$var" in 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # IFS=: 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # read -r var val 00:27:29.066 08:24:02 -- accel/accel.sh@21 -- # val= 00:27:29.066 08:24:02 -- accel/accel.sh@22 -- # case "$var" in 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # IFS=: 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # read -r var val 00:27:29.066 08:24:02 -- accel/accel.sh@21 -- # val= 00:27:29.066 08:24:02 -- accel/accel.sh@22 -- # case "$var" in 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # IFS=: 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # read -r var val 00:27:29.066 08:24:02 -- accel/accel.sh@21 -- # val=0xf 00:27:29.066 08:24:02 -- accel/accel.sh@22 -- # case "$var" in 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # IFS=: 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # read -r var val 00:27:29.066 08:24:02 -- accel/accel.sh@21 -- # val= 00:27:29.066 08:24:02 -- accel/accel.sh@22 -- # case "$var" in 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # IFS=: 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # read -r var val 00:27:29.066 08:24:02 -- accel/accel.sh@21 -- # val= 00:27:29.066 08:24:02 -- accel/accel.sh@22 -- # case "$var" in 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # IFS=: 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # read -r var val 00:27:29.066 08:24:02 -- accel/accel.sh@21 -- # val=decompress 00:27:29.066 08:24:02 -- accel/accel.sh@22 -- # case "$var" in 00:27:29.066 08:24:02 -- accel/accel.sh@24 -- # accel_opc=decompress 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # IFS=: 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # read -r var val 00:27:29.066 08:24:02 -- accel/accel.sh@21 -- # val='111250 bytes' 00:27:29.066 08:24:02 -- accel/accel.sh@22 -- # case "$var" in 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # IFS=: 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # read -r var val 00:27:29.066 08:24:02 -- accel/accel.sh@21 -- # val= 00:27:29.066 08:24:02 -- accel/accel.sh@22 -- # case "$var" in 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # IFS=: 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # read -r var val 00:27:29.066 08:24:02 -- accel/accel.sh@21 -- # val=software 00:27:29.066 08:24:02 -- accel/accel.sh@22 -- # case "$var" in 00:27:29.066 08:24:02 -- accel/accel.sh@23 -- # accel_module=software 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # IFS=: 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # read -r var val 00:27:29.066 08:24:02 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:27:29.066 08:24:02 -- accel/accel.sh@22 -- # case "$var" in 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # IFS=: 
00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # read -r var val 00:27:29.066 08:24:02 -- accel/accel.sh@21 -- # val=32 00:27:29.066 08:24:02 -- accel/accel.sh@22 -- # case "$var" in 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # IFS=: 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # read -r var val 00:27:29.066 08:24:02 -- accel/accel.sh@21 -- # val=32 00:27:29.066 08:24:02 -- accel/accel.sh@22 -- # case "$var" in 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # IFS=: 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # read -r var val 00:27:29.066 08:24:02 -- accel/accel.sh@21 -- # val=1 00:27:29.066 08:24:02 -- accel/accel.sh@22 -- # case "$var" in 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # IFS=: 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # read -r var val 00:27:29.066 08:24:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:27:29.066 08:24:02 -- accel/accel.sh@22 -- # case "$var" in 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # IFS=: 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # read -r var val 00:27:29.066 08:24:02 -- accel/accel.sh@21 -- # val=Yes 00:27:29.066 08:24:02 -- accel/accel.sh@22 -- # case "$var" in 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # IFS=: 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # read -r var val 00:27:29.066 08:24:02 -- accel/accel.sh@21 -- # val= 00:27:29.066 08:24:02 -- accel/accel.sh@22 -- # case "$var" in 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # IFS=: 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # read -r var val 00:27:29.066 08:24:02 -- accel/accel.sh@21 -- # val= 00:27:29.066 08:24:02 -- accel/accel.sh@22 -- # case "$var" in 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # IFS=: 00:27:29.066 08:24:02 -- accel/accel.sh@20 -- # read -r var val 00:27:30.461 08:24:03 -- accel/accel.sh@21 -- # val= 00:27:30.461 08:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:27:30.461 08:24:03 -- accel/accel.sh@20 -- # IFS=: 00:27:30.461 08:24:03 -- accel/accel.sh@20 -- # read -r var val 00:27:30.461 08:24:03 -- accel/accel.sh@21 -- # val= 00:27:30.461 08:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:27:30.461 08:24:03 -- accel/accel.sh@20 -- # IFS=: 00:27:30.461 08:24:03 -- accel/accel.sh@20 -- # read -r var val 00:27:30.461 08:24:03 -- accel/accel.sh@21 -- # val= 00:27:30.461 08:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:27:30.461 08:24:03 -- accel/accel.sh@20 -- # IFS=: 00:27:30.461 08:24:03 -- accel/accel.sh@20 -- # read -r var val 00:27:30.461 08:24:03 -- accel/accel.sh@21 -- # val= 00:27:30.461 08:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:27:30.461 08:24:03 -- accel/accel.sh@20 -- # IFS=: 00:27:30.461 08:24:03 -- accel/accel.sh@20 -- # read -r var val 00:27:30.461 08:24:03 -- accel/accel.sh@21 -- # val= 00:27:30.461 08:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:27:30.461 08:24:03 -- accel/accel.sh@20 -- # IFS=: 00:27:30.461 08:24:03 -- accel/accel.sh@20 -- # read -r var val 00:27:30.461 08:24:03 -- accel/accel.sh@21 -- # val= 00:27:30.461 08:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:27:30.461 08:24:03 -- accel/accel.sh@20 -- # IFS=: 00:27:30.461 08:24:03 -- accel/accel.sh@20 -- # read -r var val 00:27:30.461 08:24:03 -- accel/accel.sh@21 -- # val= 00:27:30.461 08:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:27:30.461 08:24:03 -- accel/accel.sh@20 -- # IFS=: 00:27:30.461 08:24:03 -- accel/accel.sh@20 -- # read -r var val 00:27:30.461 08:24:03 -- accel/accel.sh@21 -- # val= 00:27:30.461 08:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:27:30.461 08:24:03 -- accel/accel.sh@20 -- # IFS=: 00:27:30.461 08:24:03 -- 
accel/accel.sh@20 -- # read -r var val 00:27:30.461 08:24:03 -- accel/accel.sh@21 -- # val= 00:27:30.461 08:24:03 -- accel/accel.sh@22 -- # case "$var" in 00:27:30.461 08:24:03 -- accel/accel.sh@20 -- # IFS=: 00:27:30.461 08:24:03 -- accel/accel.sh@20 -- # read -r var val 00:27:30.461 08:24:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:27:30.461 08:24:03 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:27:30.461 ************************************ 00:27:30.461 END TEST accel_decomp_full_mcore 00:27:30.461 ************************************ 00:27:30.461 08:24:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:30.461 00:27:30.461 real 0m3.036s 00:27:30.461 user 0m9.376s 00:27:30.461 sys 0m0.218s 00:27:30.461 08:24:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:30.461 08:24:03 -- common/autotest_common.sh@10 -- # set +x 00:27:30.461 08:24:03 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:27:30.461 08:24:03 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:27:30.461 08:24:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:30.461 08:24:03 -- common/autotest_common.sh@10 -- # set +x 00:27:30.461 ************************************ 00:27:30.461 START TEST accel_decomp_mthread 00:27:30.461 ************************************ 00:27:30.461 08:24:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:27:30.461 08:24:03 -- accel/accel.sh@16 -- # local accel_opc 00:27:30.461 08:24:03 -- accel/accel.sh@17 -- # local accel_module 00:27:30.461 08:24:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:27:30.461 08:24:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:27:30.461 08:24:03 -- accel/accel.sh@12 -- # build_accel_config 00:27:30.461 08:24:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:27:30.461 08:24:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:27:30.461 08:24:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:27:30.461 08:24:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:27:30.461 08:24:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:27:30.461 08:24:03 -- accel/accel.sh@41 -- # local IFS=, 00:27:30.461 08:24:03 -- accel/accel.sh@42 -- # jq -r . 00:27:30.461 [2024-04-17 08:24:03.598247] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:30.461 [2024-04-17 08:24:03.598346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59554 ] 00:27:30.461 [2024-04-17 08:24:03.737836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.720 [2024-04-17 08:24:03.838340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.097 08:24:05 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:27:32.097 00:27:32.097 SPDK Configuration: 00:27:32.097 Core mask: 0x1 00:27:32.097 00:27:32.097 Accel Perf Configuration: 00:27:32.097 Workload Type: decompress 00:27:32.097 Transfer size: 4096 bytes 00:27:32.097 Vector count 1 00:27:32.097 Module: software 00:27:32.097 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:27:32.097 Queue depth: 32 00:27:32.097 Allocate depth: 32 00:27:32.097 # threads/core: 2 00:27:32.097 Run time: 1 seconds 00:27:32.097 Verify: Yes 00:27:32.097 00:27:32.097 Running for 1 seconds... 00:27:32.097 00:27:32.097 Core,Thread Transfers Bandwidth Failed Miscompares 00:27:32.097 ------------------------------------------------------------------------------------ 00:27:32.097 0,1 30560/s 56 MiB/s 0 0 00:27:32.097 0,0 30432/s 56 MiB/s 0 0 00:27:32.097 ==================================================================================== 00:27:32.097 Total 60992/s 238 MiB/s 0 0' 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # IFS=: 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # read -r var val 00:27:32.097 08:24:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:27:32.097 08:24:05 -- accel/accel.sh@12 -- # build_accel_config 00:27:32.097 08:24:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:27:32.097 08:24:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:27:32.097 08:24:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:27:32.097 08:24:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:27:32.097 08:24:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:27:32.097 08:24:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:27:32.097 08:24:05 -- accel/accel.sh@41 -- # local IFS=, 00:27:32.097 08:24:05 -- accel/accel.sh@42 -- # jq -r . 00:27:32.097 [2024-04-17 08:24:05.091753] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
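Both accel_decomp_mthread passes (the results above and the run that follows) stay on a single core (-c 0x1 in the EAL parameters) but pass -T 2, i.e. two worker threads per core; that is why the configuration reports "# threads/core: 2" and the results table carries rows for core,thread pairs 0,0 and 0,1. The equivalent direct invocation, with the same caveats as the earlier sketches:

    # -T 2 starts two threads on the single enabled core
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -y -T 2 \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib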
00:27:32.097 [2024-04-17 08:24:05.091833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59574 ] 00:27:32.097 [2024-04-17 08:24:05.232352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.097 [2024-04-17 08:24:05.336741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.097 08:24:05 -- accel/accel.sh@21 -- # val= 00:27:32.097 08:24:05 -- accel/accel.sh@22 -- # case "$var" in 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # IFS=: 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # read -r var val 00:27:32.097 08:24:05 -- accel/accel.sh@21 -- # val= 00:27:32.097 08:24:05 -- accel/accel.sh@22 -- # case "$var" in 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # IFS=: 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # read -r var val 00:27:32.097 08:24:05 -- accel/accel.sh@21 -- # val= 00:27:32.097 08:24:05 -- accel/accel.sh@22 -- # case "$var" in 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # IFS=: 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # read -r var val 00:27:32.097 08:24:05 -- accel/accel.sh@21 -- # val=0x1 00:27:32.097 08:24:05 -- accel/accel.sh@22 -- # case "$var" in 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # IFS=: 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # read -r var val 00:27:32.097 08:24:05 -- accel/accel.sh@21 -- # val= 00:27:32.097 08:24:05 -- accel/accel.sh@22 -- # case "$var" in 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # IFS=: 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # read -r var val 00:27:32.097 08:24:05 -- accel/accel.sh@21 -- # val= 00:27:32.097 08:24:05 -- accel/accel.sh@22 -- # case "$var" in 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # IFS=: 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # read -r var val 00:27:32.097 08:24:05 -- accel/accel.sh@21 -- # val=decompress 00:27:32.097 08:24:05 -- accel/accel.sh@22 -- # case "$var" in 00:27:32.097 08:24:05 -- accel/accel.sh@24 -- # accel_opc=decompress 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # IFS=: 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # read -r var val 00:27:32.097 08:24:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:27:32.097 08:24:05 -- accel/accel.sh@22 -- # case "$var" in 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # IFS=: 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # read -r var val 00:27:32.097 08:24:05 -- accel/accel.sh@21 -- # val= 00:27:32.097 08:24:05 -- accel/accel.sh@22 -- # case "$var" in 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # IFS=: 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # read -r var val 00:27:32.097 08:24:05 -- accel/accel.sh@21 -- # val=software 00:27:32.097 08:24:05 -- accel/accel.sh@22 -- # case "$var" in 00:27:32.097 08:24:05 -- accel/accel.sh@23 -- # accel_module=software 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # IFS=: 00:27:32.097 08:24:05 -- accel/accel.sh@20 -- # read -r var val 00:27:32.097 08:24:05 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:27:32.097 08:24:05 -- accel/accel.sh@22 -- # case "$var" in 00:27:32.098 08:24:05 -- accel/accel.sh@20 -- # IFS=: 00:27:32.098 08:24:05 -- accel/accel.sh@20 -- # read -r var val 00:27:32.098 08:24:05 -- accel/accel.sh@21 -- # val=32 00:27:32.098 08:24:05 -- accel/accel.sh@22 -- # case "$var" in 00:27:32.098 08:24:05 -- accel/accel.sh@20 -- # IFS=: 00:27:32.098 08:24:05 -- accel/accel.sh@20 -- # read -r var val 00:27:32.098 08:24:05 -- 
accel/accel.sh@21 -- # val=32 00:27:32.098 08:24:05 -- accel/accel.sh@22 -- # case "$var" in 00:27:32.098 08:24:05 -- accel/accel.sh@20 -- # IFS=: 00:27:32.098 08:24:05 -- accel/accel.sh@20 -- # read -r var val 00:27:32.098 08:24:05 -- accel/accel.sh@21 -- # val=2 00:27:32.098 08:24:05 -- accel/accel.sh@22 -- # case "$var" in 00:27:32.098 08:24:05 -- accel/accel.sh@20 -- # IFS=: 00:27:32.098 08:24:05 -- accel/accel.sh@20 -- # read -r var val 00:27:32.098 08:24:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:27:32.098 08:24:05 -- accel/accel.sh@22 -- # case "$var" in 00:27:32.098 08:24:05 -- accel/accel.sh@20 -- # IFS=: 00:27:32.098 08:24:05 -- accel/accel.sh@20 -- # read -r var val 00:27:32.098 08:24:05 -- accel/accel.sh@21 -- # val=Yes 00:27:32.098 08:24:05 -- accel/accel.sh@22 -- # case "$var" in 00:27:32.098 08:24:05 -- accel/accel.sh@20 -- # IFS=: 00:27:32.098 08:24:05 -- accel/accel.sh@20 -- # read -r var val 00:27:32.098 08:24:05 -- accel/accel.sh@21 -- # val= 00:27:32.098 08:24:05 -- accel/accel.sh@22 -- # case "$var" in 00:27:32.098 08:24:05 -- accel/accel.sh@20 -- # IFS=: 00:27:32.098 08:24:05 -- accel/accel.sh@20 -- # read -r var val 00:27:32.098 08:24:05 -- accel/accel.sh@21 -- # val= 00:27:32.098 08:24:05 -- accel/accel.sh@22 -- # case "$var" in 00:27:32.098 08:24:05 -- accel/accel.sh@20 -- # IFS=: 00:27:32.098 08:24:05 -- accel/accel.sh@20 -- # read -r var val 00:27:33.491 08:24:06 -- accel/accel.sh@21 -- # val= 00:27:33.491 08:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:27:33.491 08:24:06 -- accel/accel.sh@20 -- # IFS=: 00:27:33.491 08:24:06 -- accel/accel.sh@20 -- # read -r var val 00:27:33.491 08:24:06 -- accel/accel.sh@21 -- # val= 00:27:33.491 08:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:27:33.491 08:24:06 -- accel/accel.sh@20 -- # IFS=: 00:27:33.491 08:24:06 -- accel/accel.sh@20 -- # read -r var val 00:27:33.491 08:24:06 -- accel/accel.sh@21 -- # val= 00:27:33.491 08:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:27:33.491 08:24:06 -- accel/accel.sh@20 -- # IFS=: 00:27:33.491 08:24:06 -- accel/accel.sh@20 -- # read -r var val 00:27:33.491 08:24:06 -- accel/accel.sh@21 -- # val= 00:27:33.492 08:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:27:33.492 08:24:06 -- accel/accel.sh@20 -- # IFS=: 00:27:33.492 08:24:06 -- accel/accel.sh@20 -- # read -r var val 00:27:33.492 08:24:06 -- accel/accel.sh@21 -- # val= 00:27:33.492 08:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:27:33.492 08:24:06 -- accel/accel.sh@20 -- # IFS=: 00:27:33.492 08:24:06 -- accel/accel.sh@20 -- # read -r var val 00:27:33.492 08:24:06 -- accel/accel.sh@21 -- # val= 00:27:33.492 08:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:27:33.492 08:24:06 -- accel/accel.sh@20 -- # IFS=: 00:27:33.492 08:24:06 -- accel/accel.sh@20 -- # read -r var val 00:27:33.492 08:24:06 -- accel/accel.sh@21 -- # val= 00:27:33.492 ************************************ 00:27:33.492 END TEST accel_decomp_mthread 00:27:33.492 ************************************ 00:27:33.492 08:24:06 -- accel/accel.sh@22 -- # case "$var" in 00:27:33.492 08:24:06 -- accel/accel.sh@20 -- # IFS=: 00:27:33.492 08:24:06 -- accel/accel.sh@20 -- # read -r var val 00:27:33.492 08:24:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:27:33.492 08:24:06 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:27:33.492 08:24:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:33.492 00:27:33.492 real 0m2.997s 00:27:33.492 user 0m2.590s 00:27:33.492 sys 0m0.201s 00:27:33.492 08:24:06 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:27:33.492 08:24:06 -- common/autotest_common.sh@10 -- # set +x 00:27:33.492 08:24:06 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:27:33.492 08:24:06 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:27:33.492 08:24:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:33.492 08:24:06 -- common/autotest_common.sh@10 -- # set +x 00:27:33.492 ************************************ 00:27:33.492 START TEST accel_deomp_full_mthread 00:27:33.492 ************************************ 00:27:33.492 08:24:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:27:33.492 08:24:06 -- accel/accel.sh@16 -- # local accel_opc 00:27:33.492 08:24:06 -- accel/accel.sh@17 -- # local accel_module 00:27:33.492 08:24:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:27:33.492 08:24:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:27:33.492 08:24:06 -- accel/accel.sh@12 -- # build_accel_config 00:27:33.492 08:24:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:27:33.492 08:24:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:27:33.492 08:24:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:27:33.492 08:24:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:27:33.492 08:24:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:27:33.492 08:24:06 -- accel/accel.sh@41 -- # local IFS=, 00:27:33.492 08:24:06 -- accel/accel.sh@42 -- # jq -r . 00:27:33.492 [2024-04-17 08:24:06.655892] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:33.492 [2024-04-17 08:24:06.656074] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59608 ] 00:27:33.492 [2024-04-17 08:24:06.788067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.756 [2024-04-17 08:24:06.894463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.133 08:24:08 -- accel/accel.sh@18 -- # out='Preparing input file... 00:27:35.134 00:27:35.134 SPDK Configuration: 00:27:35.134 Core mask: 0x1 00:27:35.134 00:27:35.134 Accel Perf Configuration: 00:27:35.134 Workload Type: decompress 00:27:35.134 Transfer size: 111250 bytes 00:27:35.134 Vector count 1 00:27:35.134 Module: software 00:27:35.134 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:27:35.134 Queue depth: 32 00:27:35.134 Allocate depth: 32 00:27:35.134 # threads/core: 2 00:27:35.134 Run time: 1 seconds 00:27:35.134 Verify: Yes 00:27:35.134 00:27:35.134 Running for 1 seconds... 
00:27:35.134 00:27:35.134 Core,Thread Transfers Bandwidth Failed Miscompares 00:27:35.134 ------------------------------------------------------------------------------------ 00:27:35.134 0,1 1856/s 76 MiB/s 0 0 00:27:35.134 0,0 1824/s 75 MiB/s 0 0 00:27:35.134 ==================================================================================== 00:27:35.134 Total 3680/s 390 MiB/s 0 0' 00:27:35.134 08:24:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # IFS=: 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # read -r var val 00:27:35.134 08:24:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:27:35.134 08:24:08 -- accel/accel.sh@12 -- # build_accel_config 00:27:35.134 08:24:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:27:35.134 08:24:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:27:35.134 08:24:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:27:35.134 08:24:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:27:35.134 08:24:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:27:35.134 08:24:08 -- accel/accel.sh@41 -- # local IFS=, 00:27:35.134 08:24:08 -- accel/accel.sh@42 -- # jq -r . 00:27:35.134 [2024-04-17 08:24:08.159968] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:35.134 [2024-04-17 08:24:08.160066] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59631 ] 00:27:35.134 [2024-04-17 08:24:08.296316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.134 [2024-04-17 08:24:08.400997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.134 08:24:08 -- accel/accel.sh@21 -- # val= 00:27:35.134 08:24:08 -- accel/accel.sh@22 -- # case "$var" in 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # IFS=: 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # read -r var val 00:27:35.134 08:24:08 -- accel/accel.sh@21 -- # val= 00:27:35.134 08:24:08 -- accel/accel.sh@22 -- # case "$var" in 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # IFS=: 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # read -r var val 00:27:35.134 08:24:08 -- accel/accel.sh@21 -- # val= 00:27:35.134 08:24:08 -- accel/accel.sh@22 -- # case "$var" in 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # IFS=: 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # read -r var val 00:27:35.134 08:24:08 -- accel/accel.sh@21 -- # val=0x1 00:27:35.134 08:24:08 -- accel/accel.sh@22 -- # case "$var" in 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # IFS=: 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # read -r var val 00:27:35.134 08:24:08 -- accel/accel.sh@21 -- # val= 00:27:35.134 08:24:08 -- accel/accel.sh@22 -- # case "$var" in 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # IFS=: 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # read -r var val 00:27:35.134 08:24:08 -- accel/accel.sh@21 -- # val= 00:27:35.134 08:24:08 -- accel/accel.sh@22 -- # case "$var" in 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # IFS=: 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # read -r var val 00:27:35.134 08:24:08 -- accel/accel.sh@21 -- # val=decompress 00:27:35.134 08:24:08 -- accel/accel.sh@22 -- # case "$var" in 00:27:35.134 08:24:08 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # IFS=: 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # read -r var val 00:27:35.134 08:24:08 -- accel/accel.sh@21 -- # val='111250 bytes' 00:27:35.134 08:24:08 -- accel/accel.sh@22 -- # case "$var" in 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # IFS=: 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # read -r var val 00:27:35.134 08:24:08 -- accel/accel.sh@21 -- # val= 00:27:35.134 08:24:08 -- accel/accel.sh@22 -- # case "$var" in 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # IFS=: 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # read -r var val 00:27:35.134 08:24:08 -- accel/accel.sh@21 -- # val=software 00:27:35.134 08:24:08 -- accel/accel.sh@22 -- # case "$var" in 00:27:35.134 08:24:08 -- accel/accel.sh@23 -- # accel_module=software 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # IFS=: 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # read -r var val 00:27:35.134 08:24:08 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:27:35.134 08:24:08 -- accel/accel.sh@22 -- # case "$var" in 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # IFS=: 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # read -r var val 00:27:35.134 08:24:08 -- accel/accel.sh@21 -- # val=32 00:27:35.134 08:24:08 -- accel/accel.sh@22 -- # case "$var" in 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # IFS=: 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # read -r var val 00:27:35.134 08:24:08 -- accel/accel.sh@21 -- # val=32 00:27:35.134 08:24:08 -- accel/accel.sh@22 -- # case "$var" in 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # IFS=: 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # read -r var val 00:27:35.134 08:24:08 -- accel/accel.sh@21 -- # val=2 00:27:35.134 08:24:08 -- accel/accel.sh@22 -- # case "$var" in 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # IFS=: 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # read -r var val 00:27:35.134 08:24:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:27:35.134 08:24:08 -- accel/accel.sh@22 -- # case "$var" in 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # IFS=: 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # read -r var val 00:27:35.134 08:24:08 -- accel/accel.sh@21 -- # val=Yes 00:27:35.134 08:24:08 -- accel/accel.sh@22 -- # case "$var" in 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # IFS=: 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # read -r var val 00:27:35.134 08:24:08 -- accel/accel.sh@21 -- # val= 00:27:35.134 08:24:08 -- accel/accel.sh@22 -- # case "$var" in 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # IFS=: 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # read -r var val 00:27:35.134 08:24:08 -- accel/accel.sh@21 -- # val= 00:27:35.134 08:24:08 -- accel/accel.sh@22 -- # case "$var" in 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # IFS=: 00:27:35.134 08:24:08 -- accel/accel.sh@20 -- # read -r var val 00:27:36.514 08:24:09 -- accel/accel.sh@21 -- # val= 00:27:36.514 08:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:27:36.514 08:24:09 -- accel/accel.sh@20 -- # IFS=: 00:27:36.514 08:24:09 -- accel/accel.sh@20 -- # read -r var val 00:27:36.514 08:24:09 -- accel/accel.sh@21 -- # val= 00:27:36.514 08:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:27:36.514 08:24:09 -- accel/accel.sh@20 -- # IFS=: 00:27:36.514 08:24:09 -- accel/accel.sh@20 -- # read -r var val 00:27:36.514 08:24:09 -- accel/accel.sh@21 -- # val= 00:27:36.514 08:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:27:36.514 08:24:09 -- accel/accel.sh@20 -- # IFS=: 00:27:36.514 08:24:09 -- accel/accel.sh@20 -- # 
read -r var val 00:27:36.514 08:24:09 -- accel/accel.sh@21 -- # val= 00:27:36.514 08:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:27:36.514 08:24:09 -- accel/accel.sh@20 -- # IFS=: 00:27:36.514 08:24:09 -- accel/accel.sh@20 -- # read -r var val 00:27:36.514 08:24:09 -- accel/accel.sh@21 -- # val= 00:27:36.514 08:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:27:36.514 08:24:09 -- accel/accel.sh@20 -- # IFS=: 00:27:36.514 08:24:09 -- accel/accel.sh@20 -- # read -r var val 00:27:36.514 08:24:09 -- accel/accel.sh@21 -- # val= 00:27:36.514 08:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:27:36.514 08:24:09 -- accel/accel.sh@20 -- # IFS=: 00:27:36.514 08:24:09 -- accel/accel.sh@20 -- # read -r var val 00:27:36.514 08:24:09 -- accel/accel.sh@21 -- # val= 00:27:36.514 08:24:09 -- accel/accel.sh@22 -- # case "$var" in 00:27:36.514 08:24:09 -- accel/accel.sh@20 -- # IFS=: 00:27:36.514 08:24:09 -- accel/accel.sh@20 -- # read -r var val 00:27:36.514 08:24:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:27:36.514 08:24:09 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:27:36.514 08:24:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:36.514 00:27:36.514 real 0m3.028s 00:27:36.514 user 0m2.634s 00:27:36.514 sys 0m0.193s 00:27:36.514 08:24:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:36.514 08:24:09 -- common/autotest_common.sh@10 -- # set +x 00:27:36.514 ************************************ 00:27:36.514 END TEST accel_deomp_full_mthread 00:27:36.514 ************************************ 00:27:36.514 08:24:09 -- accel/accel.sh@116 -- # [[ n == y ]] 00:27:36.514 08:24:09 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:27:36.514 08:24:09 -- accel/accel.sh@129 -- # build_accel_config 00:27:36.514 08:24:09 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:27:36.514 08:24:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:27:36.515 08:24:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:36.515 08:24:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:27:36.515 08:24:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:27:36.515 08:24:09 -- common/autotest_common.sh@10 -- # set +x 00:27:36.515 08:24:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:27:36.515 08:24:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:27:36.515 08:24:09 -- accel/accel.sh@41 -- # local IFS=, 00:27:36.515 08:24:09 -- accel/accel.sh@42 -- # jq -r . 00:27:36.515 ************************************ 00:27:36.515 START TEST accel_dif_functional_tests 00:27:36.515 ************************************ 00:27:36.515 08:24:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:27:36.515 [2024-04-17 08:24:09.759337] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
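What follows is the CUnit accel_dif suite rather than a perf run. The *ERROR* lines it prints from dif.c are deliberate fault injections: each "verify: DIF not generated" case plants a GUARD/APPTAG/REFTAG mismatch and passes precisely because the mismatch is detected, and the iovecs-len case passes because spdk_dif_generate_copy rejects the misaligned bounce buffers. The harness launches the test binary directly:

    # standalone DIF functional tests; /dev/fd/62 carries the JSON accel config
    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62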
00:27:36.515 [2024-04-17 08:24:09.759429] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59663 ] 00:27:36.774 [2024-04-17 08:24:09.899329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:36.774 [2024-04-17 08:24:10.003082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.774 [2024-04-17 08:24:10.003319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:36.774 [2024-04-17 08:24:10.003331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.774 00:27:36.774 00:27:36.774 CUnit - A unit testing framework for C - Version 2.1-3 00:27:36.774 http://cunit.sourceforge.net/ 00:27:36.774 00:27:36.774 00:27:36.774 Suite: accel_dif 00:27:36.774 Test: verify: DIF generated, GUARD check ...passed 00:27:36.774 Test: verify: DIF generated, APPTAG check ...passed 00:27:36.774 Test: verify: DIF generated, REFTAG check ...passed 00:27:36.774 Test: verify: DIF not generated, GUARD check ...passed 00:27:36.774 Test: verify: DIF not generated, APPTAG check ...[2024-04-17 08:24:10.078189] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:27:36.774 [2024-04-17 08:24:10.078248] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:27:36.774 [2024-04-17 08:24:10.078283] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:27:36.774 [2024-04-17 08:24:10.078301] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:27:36.774 passed 00:27:36.774 Test: verify: DIF not generated, REFTAG check ...passed 00:27:36.774 Test: verify: APPTAG correct, APPTAG check ...[2024-04-17 08:24:10.078319] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:27:36.774 [2024-04-17 08:24:10.078337] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:27:36.775 passed 00:27:36.775 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-17 08:24:10.078428] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:27:36.775 passed 00:27:36.775 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:27:36.775 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:27:36.775 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:27:36.775 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:27:36.775 Test: generate copy: DIF generated, GUARD check ...[2024-04-17 08:24:10.078592] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:27:36.775 passed 00:27:36.775 Test: generate copy: DIF generated, APTTAG check ...passed 00:27:36.775 Test: generate copy: DIF generated, REFTAG check ...passed 00:27:36.775 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:27:36.775 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:27:36.775 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:27:36.775 Test: generate copy: iovecs-len validate ...passed 00:27:36.775 Test: generate copy: buffer alignment validate ...[2024-04-17 08:24:10.078919] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:27:36.775 passed 00:27:36.775 00:27:36.775 Run Summary: Type Total Ran Passed Failed Inactive 00:27:36.775 suites 1 1 n/a 0 0 00:27:36.775 tests 20 20 20 0 0 00:27:36.775 asserts 204 204 204 0 n/a 00:27:36.775 00:27:36.775 Elapsed time = 0.002 seconds 00:27:37.033 00:27:37.033 real 0m0.577s 00:27:37.033 user 0m0.724s 00:27:37.033 sys 0m0.127s 00:27:37.033 08:24:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:37.033 08:24:10 -- common/autotest_common.sh@10 -- # set +x 00:27:37.033 ************************************ 00:27:37.033 END TEST accel_dif_functional_tests 00:27:37.033 ************************************ 00:27:37.033 00:27:37.033 real 1m4.345s 00:27:37.033 user 1m8.742s 00:27:37.033 sys 0m5.713s 00:27:37.033 08:24:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:37.033 08:24:10 -- common/autotest_common.sh@10 -- # set +x 00:27:37.033 ************************************ 00:27:37.033 END TEST accel 00:27:37.033 ************************************ 00:27:37.292 08:24:10 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:27:37.292 08:24:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:37.292 08:24:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:37.292 08:24:10 -- common/autotest_common.sh@10 -- # set +x 00:27:37.292 ************************************ 00:27:37.292 START TEST accel_rpc 00:27:37.292 ************************************ 00:27:37.292 08:24:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:27:37.292 * Looking for test storage... 00:27:37.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:27:37.292 08:24:10 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:27:37.292 08:24:10 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59732 00:27:37.292 08:24:10 -- accel/accel_rpc.sh@15 -- # waitforlisten 59732 00:27:37.292 08:24:10 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:27:37.292 08:24:10 -- common/autotest_common.sh@819 -- # '[' -z 59732 ']' 00:27:37.292 08:24:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.292 08:24:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:37.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.292 08:24:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.292 08:24:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:37.292 08:24:10 -- common/autotest_common.sh@10 -- # set +x 00:27:37.292 [2024-04-17 08:24:10.574724] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:27:37.292 [2024-04-17 08:24:10.574800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59732 ] 00:27:37.551 [2024-04-17 08:24:10.712659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.551 [2024-04-17 08:24:10.811663] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:37.551 [2024-04-17 08:24:10.811810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.119 08:24:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:38.119 08:24:11 -- common/autotest_common.sh@852 -- # return 0 00:27:38.119 08:24:11 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:27:38.119 08:24:11 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:27:38.119 08:24:11 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:27:38.119 08:24:11 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:27:38.119 08:24:11 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:27:38.119 08:24:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:38.119 08:24:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:38.119 08:24:11 -- common/autotest_common.sh@10 -- # set +x 00:27:38.119 ************************************ 00:27:38.119 START TEST accel_assign_opcode 00:27:38.119 ************************************ 00:27:38.119 08:24:11 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:27:38.119 08:24:11 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:27:38.119 08:24:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:38.119 08:24:11 -- common/autotest_common.sh@10 -- # set +x 00:27:38.119 [2024-04-17 08:24:11.447012] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:27:38.377 08:24:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:38.377 08:24:11 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:27:38.377 08:24:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:38.377 08:24:11 -- common/autotest_common.sh@10 -- # set +x 00:27:38.377 [2024-04-17 08:24:11.454968] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:27:38.377 08:24:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:38.377 08:24:11 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:27:38.377 08:24:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:38.377 08:24:11 -- common/autotest_common.sh@10 -- # set +x 00:27:38.377 08:24:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:38.377 08:24:11 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:27:38.377 08:24:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:38.377 08:24:11 -- common/autotest_common.sh@10 -- # set +x 00:27:38.377 08:24:11 -- accel/accel_rpc.sh@42 -- # grep software 00:27:38.377 08:24:11 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:27:38.377 08:24:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:38.377 software 00:27:38.377 00:27:38.377 real 0m0.262s 00:27:38.377 user 0m0.043s 00:27:38.377 sys 0m0.010s 00:27:38.377 08:24:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:38.377 08:24:11 -- common/autotest_common.sh@10 -- # set +x 00:27:38.377 ************************************ 
00:27:38.377 END TEST accel_assign_opcode 00:27:38.377 ************************************ 00:27:38.636 08:24:11 -- accel/accel_rpc.sh@55 -- # killprocess 59732 00:27:38.636 08:24:11 -- common/autotest_common.sh@926 -- # '[' -z 59732 ']' 00:27:38.636 08:24:11 -- common/autotest_common.sh@930 -- # kill -0 59732 00:27:38.636 08:24:11 -- common/autotest_common.sh@931 -- # uname 00:27:38.636 08:24:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:38.636 08:24:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59732 00:27:38.636 08:24:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:38.636 08:24:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:38.636 killing process with pid 59732 00:27:38.636 08:24:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59732' 00:27:38.636 08:24:11 -- common/autotest_common.sh@945 -- # kill 59732 00:27:38.636 08:24:11 -- common/autotest_common.sh@950 -- # wait 59732 00:27:38.894 00:27:38.894 real 0m1.754s 00:27:38.894 user 0m1.731s 00:27:38.894 sys 0m0.457s 00:27:38.894 08:24:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:38.894 ************************************ 00:27:38.894 END TEST accel_rpc 00:27:38.894 ************************************ 00:27:38.894 08:24:12 -- common/autotest_common.sh@10 -- # set +x 00:27:38.894 08:24:12 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:27:38.894 08:24:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:38.894 08:24:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:38.894 08:24:12 -- common/autotest_common.sh@10 -- # set +x 00:27:38.894 ************************************ 00:27:38.894 START TEST app_cmdline 00:27:38.894 ************************************ 00:27:38.894 08:24:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:27:39.151 * Looking for test storage... 00:27:39.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:27:39.151 08:24:12 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:27:39.151 08:24:12 -- app/cmdline.sh@17 -- # spdk_tgt_pid=59837 00:27:39.151 08:24:12 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:27:39.151 08:24:12 -- app/cmdline.sh@18 -- # waitforlisten 59837 00:27:39.151 08:24:12 -- common/autotest_common.sh@819 -- # '[' -z 59837 ']' 00:27:39.151 08:24:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.151 08:24:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:39.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.151 08:24:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.151 08:24:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:39.151 08:24:12 -- common/autotest_common.sh@10 -- # set +x 00:27:39.151 [2024-04-17 08:24:12.293615] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
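Stripped of the xtrace noise, the accel_assign_opcode flow traced above amounts to a short RPC conversation with a target started under --wait-for-rpc. A minimal sketch, reusing the exact commands and repo paths from this log and assuming the default /var/tmp/spdk.sock socket:

#!/usr/bin/env bash
# Minimal sketch of the accel_assign_opcode flow traced above; the target
# is assumed to be an spdk_tgt started with --wait-for-rpc.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc accel_assign_opc -o copy -m incorrect    # pre-init, even a bogus module name is recorded
$rpc accel_assign_opc -o copy -m software     # re-assign the copy opcode to the software module
$rpc framework_start_init                     # leave --wait-for-rpc state; modules load now
$rpc accel_get_opc_assignments | jq -r .copy  # expect "software"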
00:27:39.151 [2024-04-17 08:24:12.293711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59837 ] 00:27:39.151 [2024-04-17 08:24:12.433107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.409 [2024-04-17 08:24:12.532701] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:39.409 [2024-04-17 08:24:12.532838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.976 08:24:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:39.976 08:24:13 -- common/autotest_common.sh@852 -- # return 0 00:27:39.976 08:24:13 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:27:40.235 { 00:27:40.235 "fields": { 00:27:40.235 "commit": "36faa8c31", 00:27:40.235 "major": 24, 00:27:40.235 "minor": 1, 00:27:40.235 "patch": 1, 00:27:40.235 "suffix": "-pre" 00:27:40.235 }, 00:27:40.235 "version": "SPDK v24.01.1-pre git sha1 36faa8c31" 00:27:40.235 } 00:27:40.235 08:24:13 -- app/cmdline.sh@22 -- # expected_methods=() 00:27:40.235 08:24:13 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:27:40.235 08:24:13 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:27:40.235 08:24:13 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:27:40.235 08:24:13 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:27:40.235 08:24:13 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:27:40.235 08:24:13 -- app/cmdline.sh@26 -- # sort 00:27:40.235 08:24:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:40.235 08:24:13 -- common/autotest_common.sh@10 -- # set +x 00:27:40.235 08:24:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:40.235 08:24:13 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:27:40.235 08:24:13 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:27:40.235 08:24:13 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:27:40.235 08:24:13 -- common/autotest_common.sh@640 -- # local es=0 00:27:40.235 08:24:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:27:40.235 08:24:13 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:40.235 08:24:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:40.235 08:24:13 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:40.235 08:24:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:40.235 08:24:13 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:40.235 08:24:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:40.235 08:24:13 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:40.235 08:24:13 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:40.235 08:24:13 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:27:40.494 2024/04/17 08:24:13 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for 
env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:27:40.494 request: 00:27:40.494 { 00:27:40.494 "method": "env_dpdk_get_mem_stats", 00:27:40.494 "params": {} 00:27:40.494 } 00:27:40.494 Got JSON-RPC error response 00:27:40.494 GoRPCClient: error on JSON-RPC call 00:27:40.494 08:24:13 -- common/autotest_common.sh@643 -- # es=1 00:27:40.494 08:24:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:40.494 08:24:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:40.494 08:24:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:40.494 08:24:13 -- app/cmdline.sh@1 -- # killprocess 59837 00:27:40.494 08:24:13 -- common/autotest_common.sh@926 -- # '[' -z 59837 ']' 00:27:40.494 08:24:13 -- common/autotest_common.sh@930 -- # kill -0 59837 00:27:40.494 08:24:13 -- common/autotest_common.sh@931 -- # uname 00:27:40.494 08:24:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:40.494 08:24:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59837 00:27:40.494 08:24:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:40.494 08:24:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:40.494 killing process with pid 59837 00:27:40.494 08:24:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59837' 00:27:40.494 08:24:13 -- common/autotest_common.sh@945 -- # kill 59837 00:27:40.494 08:24:13 -- common/autotest_common.sh@950 -- # wait 59837 00:27:40.752 00:27:40.752 real 0m1.840s 00:27:40.752 user 0m2.240s 00:27:40.752 sys 0m0.417s 00:27:40.752 08:24:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:40.752 ************************************ 00:27:40.752 END TEST app_cmdline 00:27:40.752 ************************************ 00:27:40.752 08:24:14 -- common/autotest_common.sh@10 -- # set +x 00:27:41.011 08:24:14 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:27:41.011 08:24:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:41.011 08:24:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:41.011 08:24:14 -- common/autotest_common.sh@10 -- # set +x 00:27:41.011 ************************************ 00:27:41.011 START TEST version 00:27:41.011 ************************************ 00:27:41.011 08:24:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:27:41.011 * Looking for test storage... 
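The app_cmdline test that just finished demonstrates the RPC allow-list: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods answer and anything else is rejected with JSON-RPC -32601 (Method not found). A condensed sketch of that behaviour; the backgrounding and fixed sleep are simplifications of the test's waitforlisten helper:

#!/usr/bin/env bash
# Sketch of the --rpcs-allowed behaviour the app_cmdline test exercised above.
tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
tgt_pid=$!
sleep 2                            # crude stand-in for the test's waitforlisten helper
$rpc spdk_get_version              # allowed: prints the version object shown above
$rpc rpc_get_methods               # allowed: returns exactly the two whitelisted methods
$rpc env_dpdk_get_mem_stats \
    || echo "rejected as expected: Code=-32601 Method not found"
kill "$tgt_pid"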
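The version test that starts here is just text extraction over include/spdk/version.h. The sketch below reproduces it: the grep/cut/tr stages are taken verbatim from the trace that follows, while folding a -pre suffix into rc0 is inferred from the values the test prints (24.1.1 plus -pre becoming 24.1.1rc0):

#!/usr/bin/env bash
# Sketch of the version derivation test/app/version.sh performs below.
hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
get_ver() { grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
major=$(get_ver MAJOR)    # 24
minor=$(get_ver MINOR)    # 1
patch=$(get_ver PATCH)    # 1
suffix=$(get_ver SUFFIX)  # -pre
version=$major.$minor
(( patch != 0 )) && version=$version.$patch
[[ $suffix == -pre ]] && version=${version}rc0   # inferred mapping: -pre -> rc0
echo "$version"   # 24.1.1rc0, compared against python3 -c 'import spdk; print(spdk.__version__)'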
00:27:41.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:27:41.011 08:24:14 -- app/version.sh@17 -- # get_header_version major 00:27:41.011 08:24:14 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:27:41.011 08:24:14 -- app/version.sh@14 -- # cut -f2 00:27:41.011 08:24:14 -- app/version.sh@14 -- # tr -d '"' 00:27:41.011 08:24:14 -- app/version.sh@17 -- # major=24 00:27:41.011 08:24:14 -- app/version.sh@18 -- # get_header_version minor 00:27:41.011 08:24:14 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:27:41.011 08:24:14 -- app/version.sh@14 -- # cut -f2 00:27:41.011 08:24:14 -- app/version.sh@14 -- # tr -d '"' 00:27:41.011 08:24:14 -- app/version.sh@18 -- # minor=1 00:27:41.011 08:24:14 -- app/version.sh@19 -- # get_header_version patch 00:27:41.011 08:24:14 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:27:41.011 08:24:14 -- app/version.sh@14 -- # cut -f2 00:27:41.011 08:24:14 -- app/version.sh@14 -- # tr -d '"' 00:27:41.011 08:24:14 -- app/version.sh@19 -- # patch=1 00:27:41.011 08:24:14 -- app/version.sh@20 -- # get_header_version suffix 00:27:41.011 08:24:14 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:27:41.011 08:24:14 -- app/version.sh@14 -- # cut -f2 00:27:41.012 08:24:14 -- app/version.sh@14 -- # tr -d '"' 00:27:41.012 08:24:14 -- app/version.sh@20 -- # suffix=-pre 00:27:41.012 08:24:14 -- app/version.sh@22 -- # version=24.1 00:27:41.012 08:24:14 -- app/version.sh@25 -- # (( patch != 0 )) 00:27:41.012 08:24:14 -- app/version.sh@25 -- # version=24.1.1 00:27:41.012 08:24:14 -- app/version.sh@28 -- # version=24.1.1rc0 00:27:41.012 08:24:14 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:27:41.012 08:24:14 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:27:41.012 08:24:14 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:27:41.012 08:24:14 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:27:41.012 ************************************ 00:27:41.012 END TEST version 00:27:41.012 ************************************ 00:27:41.012 00:27:41.012 real 0m0.212s 00:27:41.012 user 0m0.129s 00:27:41.012 sys 0m0.135s 00:27:41.012 08:24:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:41.012 08:24:14 -- common/autotest_common.sh@10 -- # set +x 00:27:41.271 08:24:14 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:27:41.271 08:24:14 -- spdk/autotest.sh@204 -- # uname -s 00:27:41.271 08:24:14 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:27:41.271 08:24:14 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:27:41.271 08:24:14 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:27:41.271 08:24:14 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:27:41.271 08:24:14 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:27:41.271 08:24:14 -- spdk/autotest.sh@268 -- # timing_exit lib 00:27:41.271 08:24:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:41.271 08:24:14 -- common/autotest_common.sh@10 -- # set +x 00:27:41.271 08:24:14 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:27:41.271 08:24:14 -- 
spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:27:41.271 08:24:14 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:27:41.271 08:24:14 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:27:41.271 08:24:14 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:27:41.271 08:24:14 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:27:41.271 08:24:14 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:27:41.271 08:24:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:41.271 08:24:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:41.271 08:24:14 -- common/autotest_common.sh@10 -- # set +x 00:27:41.271 ************************************ 00:27:41.271 START TEST nvmf_tcp 00:27:41.271 ************************************ 00:27:41.271 08:24:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:27:41.271 * Looking for test storage... 00:27:41.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:27:41.271 08:24:14 -- nvmf/nvmf.sh@10 -- # uname -s 00:27:41.271 08:24:14 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:27:41.271 08:24:14 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:41.271 08:24:14 -- nvmf/common.sh@7 -- # uname -s 00:27:41.271 08:24:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:41.271 08:24:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:41.271 08:24:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:41.271 08:24:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:41.271 08:24:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:41.271 08:24:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:41.271 08:24:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:41.271 08:24:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:41.271 08:24:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:41.271 08:24:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:41.271 08:24:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:27:41.271 08:24:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:27:41.271 08:24:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:41.271 08:24:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:41.271 08:24:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:41.271 08:24:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:41.271 08:24:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:41.271 08:24:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:41.271 08:24:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:41.271 08:24:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.271 08:24:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.271 08:24:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.271 08:24:14 -- paths/export.sh@5 -- # export PATH 00:27:41.271 08:24:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.271 08:24:14 -- nvmf/common.sh@46 -- # : 0 00:27:41.271 08:24:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:41.271 08:24:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:41.271 08:24:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:41.271 08:24:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:41.271 08:24:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:41.271 08:24:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:41.271 08:24:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:41.271 08:24:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:41.271 08:24:14 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:41.271 08:24:14 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:27:41.271 08:24:14 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:27:41.271 08:24:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:41.271 08:24:14 -- common/autotest_common.sh@10 -- # set +x 00:27:41.532 08:24:14 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:27:41.532 08:24:14 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:27:41.532 08:24:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:41.532 08:24:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:41.532 08:24:14 -- common/autotest_common.sh@10 -- # set +x 00:27:41.532 ************************************ 00:27:41.532 START TEST nvmf_example 00:27:41.532 ************************************ 00:27:41.532 08:24:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:27:41.532 * Looking for test storage... 
00:27:41.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:41.532 08:24:14 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:41.532 08:24:14 -- nvmf/common.sh@7 -- # uname -s 00:27:41.532 08:24:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:41.532 08:24:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:41.532 08:24:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:41.532 08:24:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:41.532 08:24:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:41.532 08:24:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:41.532 08:24:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:41.532 08:24:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:41.532 08:24:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:41.532 08:24:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:41.532 08:24:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:27:41.532 08:24:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:27:41.532 08:24:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:41.532 08:24:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:41.532 08:24:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:41.532 08:24:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:41.532 08:24:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:41.532 08:24:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:41.532 08:24:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:41.532 08:24:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.532 08:24:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.533 08:24:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.533 08:24:14 -- 
paths/export.sh@5 -- # export PATH 00:27:41.533 08:24:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.533 08:24:14 -- nvmf/common.sh@46 -- # : 0 00:27:41.533 08:24:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:41.533 08:24:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:41.533 08:24:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:41.533 08:24:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:41.533 08:24:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:41.533 08:24:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:41.533 08:24:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:41.533 08:24:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:41.533 08:24:14 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:27:41.533 08:24:14 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:41.533 08:24:14 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:41.533 08:24:14 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:27:41.533 08:24:14 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:27:41.533 08:24:14 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:27:41.533 08:24:14 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:27:41.533 08:24:14 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:27:41.533 08:24:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:41.533 08:24:14 -- common/autotest_common.sh@10 -- # set +x 00:27:41.533 08:24:14 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:27:41.533 08:24:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:41.533 08:24:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:41.533 08:24:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:41.533 08:24:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:41.533 08:24:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:41.533 08:24:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.533 08:24:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:41.533 08:24:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.533 08:24:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:27:41.533 08:24:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:27:41.533 08:24:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:27:41.533 08:24:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:27:41.533 08:24:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:27:41.533 08:24:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:27:41.533 08:24:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.533 08:24:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.533 08:24:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:41.533 08:24:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:27:41.533 08:24:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:41.533 08:24:14 
-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:41.533 08:24:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:41.533 08:24:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.533 08:24:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:41.533 08:24:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:41.533 08:24:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:41.533 08:24:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:41.533 08:24:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:27:41.533 Cannot find device "nvmf_init_br" 00:27:41.533 08:24:14 -- nvmf/common.sh@153 -- # true 00:27:41.533 08:24:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:27:41.533 Cannot find device "nvmf_tgt_br" 00:27:41.533 08:24:14 -- nvmf/common.sh@154 -- # true 00:27:41.533 08:24:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:27:41.533 Cannot find device "nvmf_tgt_br2" 00:27:41.533 08:24:14 -- nvmf/common.sh@155 -- # true 00:27:41.533 08:24:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:27:41.804 Cannot find device "nvmf_init_br" 00:27:41.804 08:24:14 -- nvmf/common.sh@156 -- # true 00:27:41.804 08:24:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:27:41.804 Cannot find device "nvmf_tgt_br" 00:27:41.804 08:24:14 -- nvmf/common.sh@157 -- # true 00:27:41.804 08:24:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:27:41.804 Cannot find device "nvmf_tgt_br2" 00:27:41.804 08:24:14 -- nvmf/common.sh@158 -- # true 00:27:41.804 08:24:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:27:41.804 Cannot find device "nvmf_br" 00:27:41.804 08:24:14 -- nvmf/common.sh@159 -- # true 00:27:41.804 08:24:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:27:41.804 Cannot find device "nvmf_init_if" 00:27:41.804 08:24:14 -- nvmf/common.sh@160 -- # true 00:27:41.804 08:24:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:41.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:41.804 08:24:14 -- nvmf/common.sh@161 -- # true 00:27:41.804 08:24:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:41.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:41.804 08:24:14 -- nvmf/common.sh@162 -- # true 00:27:41.804 08:24:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:27:41.804 08:24:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:41.804 08:24:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:41.804 08:24:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:41.804 08:24:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:41.804 08:24:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:41.804 08:24:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:41.804 08:24:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:41.804 08:24:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:41.804 08:24:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:27:41.804 
08:24:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:27:41.804 08:24:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:27:41.804 08:24:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:27:41.804 08:24:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:41.804 08:24:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:41.804 08:24:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:41.804 08:24:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:27:42.063 08:24:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:27:42.063 08:24:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:27:42.063 08:24:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:42.063 08:24:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:42.063 08:24:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:42.063 08:24:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:42.063 08:24:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:27:42.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:42.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:27:42.063 00:27:42.063 --- 10.0.0.2 ping statistics --- 00:27:42.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.063 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:27:42.063 08:24:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:27:42.063 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:42.063 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.124 ms 00:27:42.063 00:27:42.063 --- 10.0.0.3 ping statistics --- 00:27:42.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.063 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:27:42.063 08:24:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:42.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:42.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:27:42.063 00:27:42.063 --- 10.0.0.1 ping statistics --- 00:27:42.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.063 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:27:42.063 08:24:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:42.063 08:24:15 -- nvmf/common.sh@421 -- # return 0 00:27:42.063 08:24:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:42.063 08:24:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:42.063 08:24:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:42.063 08:24:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:42.063 08:24:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:42.063 08:24:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:42.063 08:24:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:42.063 08:24:15 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:27:42.063 08:24:15 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:27:42.063 08:24:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:42.063 08:24:15 -- common/autotest_common.sh@10 -- # set +x 00:27:42.063 08:24:15 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:27:42.063 08:24:15 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:27:42.063 08:24:15 -- target/nvmf_example.sh@34 -- # nvmfpid=60181 00:27:42.063 08:24:15 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:42.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.063 08:24:15 -- target/nvmf_example.sh@36 -- # waitforlisten 60181 00:27:42.063 08:24:15 -- common/autotest_common.sh@819 -- # '[' -z 60181 ']' 00:27:42.063 08:24:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.063 08:24:15 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:27:42.063 08:24:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:42.063 08:24:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
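Behind the nvmf_veth_init trace above is a small, reproducible topology: one veth pair for the initiator side, one whose far end lives in the nvmf_tgt_ns_spdk namespace, both stitched together by a bridge, with TCP/4420 opened for the NVMe-oF listener. A condensed sketch built only from the commands the trace shows; the second target interface (nvmf_tgt_if2, 10.0.0.3) is configured the same way and omitted here:

#!/usr/bin/env bash
# Condensed rebuild of the veth/bridge topology nvmf_veth_init creates above.
set -e
ns=nvmf_tgt_ns_spdk
ip netns add "$ns"
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns "$ns"                         # move target end into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$ns" ip link set nvmf_tgt_if up
ip netns exec "$ns" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # bridge both pairs together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                          # host -> namespace sanity check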
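The example run that follows then configures the target entirely over RPC before handing it to spdk_nvme_perf. The sketch below mirrors that sequence with the exact commands the trace shows next, assuming the example target is already up and listening on the default RPC socket; the comment on -M reflects perf's read/write mix option:

#!/usr/bin/env bash
# Sketch of the RPC bring-up the nvmf_example test performs next.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
$rpc nvmf_create_transport -t tcp -o -u 8192     # transport options as traced (-u: 8 KiB IO unit)
$rpc bdev_malloc_create 64 512                   # 64 MiB RAM bdev, 512 B blocks -> "Malloc0"
$rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns "$nqn" Malloc0
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
# Exercise it with the same workload: QD 64, 4 KiB IOs, randrw at a 30% read mix, 10 s.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'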
00:27:42.063 08:24:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:42.063 08:24:15 -- common/autotest_common.sh@10 -- # set +x 00:27:42.999 08:24:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:42.999 08:24:16 -- common/autotest_common.sh@852 -- # return 0 00:27:42.999 08:24:16 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:27:42.999 08:24:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:42.999 08:24:16 -- common/autotest_common.sh@10 -- # set +x 00:27:42.999 08:24:16 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:42.999 08:24:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:42.999 08:24:16 -- common/autotest_common.sh@10 -- # set +x 00:27:42.999 08:24:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:42.999 08:24:16 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:27:42.999 08:24:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:42.999 08:24:16 -- common/autotest_common.sh@10 -- # set +x 00:27:43.256 08:24:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.256 08:24:16 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:27:43.256 08:24:16 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:43.256 08:24:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.256 08:24:16 -- common/autotest_common.sh@10 -- # set +x 00:27:43.256 08:24:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.256 08:24:16 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:27:43.256 08:24:16 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:43.256 08:24:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.256 08:24:16 -- common/autotest_common.sh@10 -- # set +x 00:27:43.256 08:24:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.256 08:24:16 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:43.256 08:24:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.256 08:24:16 -- common/autotest_common.sh@10 -- # set +x 00:27:43.256 08:24:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.256 08:24:16 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:27:43.256 08:24:16 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:53.256 Initializing NVMe Controllers 00:27:53.256 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:53.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:53.256 Initialization complete. Launching workers. 
00:27:53.256 ======================================================== 00:27:53.256 Latency(us) 00:27:53.256 Device Information : IOPS MiB/s Average min max 00:27:53.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15697.44 61.32 4076.70 599.42 28973.32 00:27:53.256 ======================================================== 00:27:53.256 Total : 15697.44 61.32 4076.70 599.42 28973.32 00:27:53.256 00:27:53.256 08:24:26 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:27:53.256 08:24:26 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:27:53.256 08:24:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:53.256 08:24:26 -- nvmf/common.sh@116 -- # sync 00:27:53.516 08:24:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:53.516 08:24:26 -- nvmf/common.sh@119 -- # set +e 00:27:53.516 08:24:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:53.516 08:24:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:53.516 rmmod nvme_tcp 00:27:53.516 rmmod nvme_fabrics 00:27:53.516 rmmod nvme_keyring 00:27:53.516 08:24:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:53.516 08:24:26 -- nvmf/common.sh@123 -- # set -e 00:27:53.516 08:24:26 -- nvmf/common.sh@124 -- # return 0 00:27:53.516 08:24:26 -- nvmf/common.sh@477 -- # '[' -n 60181 ']' 00:27:53.516 08:24:26 -- nvmf/common.sh@478 -- # killprocess 60181 00:27:53.516 08:24:26 -- common/autotest_common.sh@926 -- # '[' -z 60181 ']' 00:27:53.516 08:24:26 -- common/autotest_common.sh@930 -- # kill -0 60181 00:27:53.516 08:24:26 -- common/autotest_common.sh@931 -- # uname 00:27:53.516 08:24:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:53.516 08:24:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60181 00:27:53.516 killing process with pid 60181 00:27:53.516 08:24:26 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:27:53.516 08:24:26 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:27:53.516 08:24:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60181' 00:27:53.516 08:24:26 -- common/autotest_common.sh@945 -- # kill 60181 00:27:53.516 08:24:26 -- common/autotest_common.sh@950 -- # wait 60181 00:27:53.774 nvmf threads initialize successfully 00:27:53.774 bdev subsystem init successfully 00:27:53.774 created a nvmf target service 00:27:53.774 create targets's poll groups done 00:27:53.774 all subsystems of target started 00:27:53.774 nvmf target is running 00:27:53.774 all subsystems of target stopped 00:27:53.774 destroy targets's poll groups done 00:27:53.774 destroyed the nvmf target service 00:27:53.774 bdev subsystem finish successfully 00:27:53.774 nvmf threads destroy successfully 00:27:53.774 08:24:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:53.774 08:24:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:53.774 08:24:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:53.774 08:24:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:53.774 08:24:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:53.774 08:24:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.774 08:24:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:53.774 08:24:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.774 08:24:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:53.774 08:24:27 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:27:53.774 08:24:27 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:27:53.774 08:24:27 -- common/autotest_common.sh@10 -- # set +x 00:27:54.035 ************************************ 00:27:54.035 END TEST nvmf_example 00:27:54.035 ************************************ 00:27:54.035 00:27:54.035 real 0m12.516s 00:27:54.035 user 0m44.776s 00:27:54.035 sys 0m1.728s 00:27:54.035 08:24:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:54.035 08:24:27 -- common/autotest_common.sh@10 -- # set +x 00:27:54.035 08:24:27 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:27:54.035 08:24:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:54.035 08:24:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:54.035 08:24:27 -- common/autotest_common.sh@10 -- # set +x 00:27:54.035 ************************************ 00:27:54.035 START TEST nvmf_filesystem 00:27:54.035 ************************************ 00:27:54.035 08:24:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:27:54.035 * Looking for test storage... 00:27:54.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:54.035 08:24:27 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:27:54.035 08:24:27 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:27:54.035 08:24:27 -- common/autotest_common.sh@34 -- # set -e 00:27:54.035 08:24:27 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:27:54.035 08:24:27 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:27:54.035 08:24:27 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:27:54.035 08:24:27 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:27:54.035 08:24:27 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:27:54.035 08:24:27 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:27:54.035 08:24:27 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:27:54.035 08:24:27 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:27:54.035 08:24:27 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:27:54.035 08:24:27 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:27:54.035 08:24:27 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:27:54.035 08:24:27 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:27:54.035 08:24:27 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:27:54.035 08:24:27 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:27:54.035 08:24:27 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:27:54.035 08:24:27 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:27:54.035 08:24:27 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:27:54.035 08:24:27 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:27:54.035 08:24:27 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:27:54.035 08:24:27 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:27:54.035 08:24:27 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:27:54.035 08:24:27 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:27:54.035 08:24:27 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:27:54.035 08:24:27 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:27:54.035 08:24:27 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:27:54.035 08:24:27 -- common/build_config.sh@22 -- # 
CONFIG_CET=n 00:27:54.035 08:24:27 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:27:54.035 08:24:27 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:27:54.035 08:24:27 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:27:54.035 08:24:27 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:27:54.036 08:24:27 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:27:54.036 08:24:27 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:27:54.036 08:24:27 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:27:54.036 08:24:27 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:27:54.036 08:24:27 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:27:54.036 08:24:27 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:27:54.036 08:24:27 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:27:54.036 08:24:27 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:27:54.036 08:24:27 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:27:54.036 08:24:27 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:27:54.036 08:24:27 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:27:54.036 08:24:27 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:27:54.036 08:24:27 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:27:54.036 08:24:27 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:27:54.036 08:24:27 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:27:54.036 08:24:27 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:27:54.036 08:24:27 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:27:54.036 08:24:27 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:27:54.036 08:24:27 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:27:54.036 08:24:27 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:27:54.036 08:24:27 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:27:54.036 08:24:27 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:27:54.036 08:24:27 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:27:54.036 08:24:27 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:27:54.036 08:24:27 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:27:54.036 08:24:27 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:27:54.036 08:24:27 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:27:54.036 08:24:27 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:27:54.036 08:24:27 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:27:54.036 08:24:27 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:27:54.036 08:24:27 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:27:54.036 08:24:27 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:27:54.036 08:24:27 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:27:54.036 08:24:27 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:27:54.036 08:24:27 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:27:54.036 08:24:27 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:27:54.036 08:24:27 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:27:54.036 08:24:27 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:27:54.036 08:24:27 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:27:54.036 08:24:27 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:27:54.036 08:24:27 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:27:54.036 08:24:27 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:27:54.036 08:24:27 -- common/build_config.sh@69 -- # 
CONFIG_FIO_PLUGIN=y 00:27:54.036 08:24:27 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:27:54.036 08:24:27 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:27:54.036 08:24:27 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:27:54.036 08:24:27 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:27:54.036 08:24:27 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:27:54.036 08:24:27 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:27:54.036 08:24:27 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:27:54.036 08:24:27 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:27:54.036 08:24:27 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:27:54.036 08:24:27 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:27:54.036 08:24:27 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:27:54.036 08:24:27 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:27:54.036 08:24:27 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:27:54.036 08:24:27 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:27:54.036 08:24:27 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:27:54.036 08:24:27 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:27:54.036 08:24:27 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:27:54.036 08:24:27 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:27:54.036 08:24:27 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:27:54.036 08:24:27 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:27:54.036 08:24:27 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:27:54.036 08:24:27 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:27:54.036 08:24:27 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:27:54.036 08:24:27 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:27:54.036 08:24:27 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:27:54.036 08:24:27 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:27:54.036 #define SPDK_CONFIG_H 00:27:54.036 #define SPDK_CONFIG_APPS 1 00:27:54.036 #define SPDK_CONFIG_ARCH native 00:27:54.036 #undef SPDK_CONFIG_ASAN 00:27:54.036 #define SPDK_CONFIG_AVAHI 1 00:27:54.036 #undef SPDK_CONFIG_CET 00:27:54.036 #define SPDK_CONFIG_COVERAGE 1 00:27:54.036 #define SPDK_CONFIG_CROSS_PREFIX 00:27:54.036 #undef SPDK_CONFIG_CRYPTO 00:27:54.036 #undef SPDK_CONFIG_CRYPTO_MLX5 00:27:54.036 #undef SPDK_CONFIG_CUSTOMOCF 00:27:54.036 #undef SPDK_CONFIG_DAOS 00:27:54.036 #define SPDK_CONFIG_DAOS_DIR 00:27:54.036 #define SPDK_CONFIG_DEBUG 1 00:27:54.036 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:27:54.036 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:27:54.036 #define SPDK_CONFIG_DPDK_INC_DIR 00:27:54.036 #define SPDK_CONFIG_DPDK_LIB_DIR 00:27:54.036 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:27:54.036 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:27:54.036 #define SPDK_CONFIG_EXAMPLES 1 00:27:54.036 #undef SPDK_CONFIG_FC 00:27:54.036 #define SPDK_CONFIG_FC_PATH 00:27:54.036 #define SPDK_CONFIG_FIO_PLUGIN 1 00:27:54.036 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:27:54.036 #undef 
SPDK_CONFIG_FUSE 00:27:54.036 #undef SPDK_CONFIG_FUZZER 00:27:54.036 #define SPDK_CONFIG_FUZZER_LIB 00:27:54.036 #define SPDK_CONFIG_GOLANG 1 00:27:54.036 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:27:54.036 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:27:54.036 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:27:54.036 #undef SPDK_CONFIG_HAVE_LIBBSD 00:27:54.036 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:27:54.036 #define SPDK_CONFIG_IDXD 1 00:27:54.036 #undef SPDK_CONFIG_IDXD_KERNEL 00:27:54.036 #undef SPDK_CONFIG_IPSEC_MB 00:27:54.036 #define SPDK_CONFIG_IPSEC_MB_DIR 00:27:54.036 #define SPDK_CONFIG_ISAL 1 00:27:54.036 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:27:54.036 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:27:54.036 #define SPDK_CONFIG_LIBDIR 00:27:54.036 #undef SPDK_CONFIG_LTO 00:27:54.036 #define SPDK_CONFIG_MAX_LCORES 00:27:54.036 #define SPDK_CONFIG_NVME_CUSE 1 00:27:54.036 #undef SPDK_CONFIG_OCF 00:27:54.036 #define SPDK_CONFIG_OCF_PATH 00:27:54.036 #define SPDK_CONFIG_OPENSSL_PATH 00:27:54.036 #undef SPDK_CONFIG_PGO_CAPTURE 00:27:54.036 #undef SPDK_CONFIG_PGO_USE 00:27:54.036 #define SPDK_CONFIG_PREFIX /usr/local 00:27:54.036 #undef SPDK_CONFIG_RAID5F 00:27:54.036 #undef SPDK_CONFIG_RBD 00:27:54.036 #define SPDK_CONFIG_RDMA 1 00:27:54.036 #define SPDK_CONFIG_RDMA_PROV verbs 00:27:54.036 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:27:54.036 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:27:54.036 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:27:54.036 #define SPDK_CONFIG_SHARED 1 00:27:54.036 #undef SPDK_CONFIG_SMA 00:27:54.036 #define SPDK_CONFIG_TESTS 1 00:27:54.036 #undef SPDK_CONFIG_TSAN 00:27:54.036 #define SPDK_CONFIG_UBLK 1 00:27:54.036 #define SPDK_CONFIG_UBSAN 1 00:27:54.036 #undef SPDK_CONFIG_UNIT_TESTS 00:27:54.036 #undef SPDK_CONFIG_URING 00:27:54.036 #define SPDK_CONFIG_URING_PATH 00:27:54.036 #undef SPDK_CONFIG_URING_ZNS 00:27:54.036 #define SPDK_CONFIG_USDT 1 00:27:54.036 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:27:54.036 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:27:54.036 #define SPDK_CONFIG_VFIO_USER 1 00:27:54.036 #define SPDK_CONFIG_VFIO_USER_DIR 00:27:54.036 #define SPDK_CONFIG_VHOST 1 00:27:54.036 #define SPDK_CONFIG_VIRTIO 1 00:27:54.036 #undef SPDK_CONFIG_VTUNE 00:27:54.036 #define SPDK_CONFIG_VTUNE_DIR 00:27:54.036 #define SPDK_CONFIG_WERROR 1 00:27:54.036 #define SPDK_CONFIG_WPDK_DIR 00:27:54.036 #undef SPDK_CONFIG_XNVME 00:27:54.036 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:27:54.036 08:24:27 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:27:54.036 08:24:27 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:54.036 08:24:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.036 08:24:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.036 08:24:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.036 08:24:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.036 08:24:27 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.037 08:24:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.037 08:24:27 -- paths/export.sh@5 -- # export PATH 00:27:54.037 08:24:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.037 08:24:27 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:27:54.037 08:24:27 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:27:54.037 08:24:27 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:27:54.037 08:24:27 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:27:54.037 08:24:27 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:27:54.037 08:24:27 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:27:54.037 08:24:27 -- pm/common@16 -- # TEST_TAG=N/A 00:27:54.037 08:24:27 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:27:54.037 08:24:27 -- common/autotest_common.sh@52 -- # : 1 00:27:54.037 08:24:27 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:27:54.037 08:24:27 -- common/autotest_common.sh@56 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:27:54.037 08:24:27 -- common/autotest_common.sh@58 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:27:54.037 08:24:27 -- common/autotest_common.sh@60 -- # : 1 00:27:54.037 08:24:27 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:27:54.037 08:24:27 -- common/autotest_common.sh@62 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:27:54.037 08:24:27 -- common/autotest_common.sh@64 -- # : 00:27:54.037 08:24:27 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:27:54.037 08:24:27 -- common/autotest_common.sh@66 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@67 -- # export 
SPDK_TEST_RELEASE_BUILD 00:27:54.037 08:24:27 -- common/autotest_common.sh@68 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:27:54.037 08:24:27 -- common/autotest_common.sh@70 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:27:54.037 08:24:27 -- common/autotest_common.sh@72 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:27:54.037 08:24:27 -- common/autotest_common.sh@74 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:27:54.037 08:24:27 -- common/autotest_common.sh@76 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:27:54.037 08:24:27 -- common/autotest_common.sh@78 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:27:54.037 08:24:27 -- common/autotest_common.sh@80 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:27:54.037 08:24:27 -- common/autotest_common.sh@82 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:27:54.037 08:24:27 -- common/autotest_common.sh@84 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:27:54.037 08:24:27 -- common/autotest_common.sh@86 -- # : 1 00:27:54.037 08:24:27 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:27:54.037 08:24:27 -- common/autotest_common.sh@88 -- # : 1 00:27:54.037 08:24:27 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:27:54.037 08:24:27 -- common/autotest_common.sh@90 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:27:54.037 08:24:27 -- common/autotest_common.sh@92 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:27:54.037 08:24:27 -- common/autotest_common.sh@94 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:27:54.037 08:24:27 -- common/autotest_common.sh@96 -- # : tcp 00:27:54.037 08:24:27 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:27:54.037 08:24:27 -- common/autotest_common.sh@98 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:27:54.037 08:24:27 -- common/autotest_common.sh@100 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:27:54.037 08:24:27 -- common/autotest_common.sh@102 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:27:54.037 08:24:27 -- common/autotest_common.sh@104 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:27:54.037 08:24:27 -- common/autotest_common.sh@106 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:27:54.037 08:24:27 -- common/autotest_common.sh@108 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:27:54.037 08:24:27 -- common/autotest_common.sh@110 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:27:54.037 08:24:27 -- common/autotest_common.sh@112 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:27:54.037 08:24:27 -- common/autotest_common.sh@114 -- # : 0 00:27:54.037 08:24:27 -- 
common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:27:54.037 08:24:27 -- common/autotest_common.sh@116 -- # : 1 00:27:54.037 08:24:27 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:27:54.037 08:24:27 -- common/autotest_common.sh@118 -- # : 00:27:54.037 08:24:27 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:27:54.037 08:24:27 -- common/autotest_common.sh@120 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:27:54.037 08:24:27 -- common/autotest_common.sh@122 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:27:54.037 08:24:27 -- common/autotest_common.sh@124 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:27:54.037 08:24:27 -- common/autotest_common.sh@126 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:27:54.037 08:24:27 -- common/autotest_common.sh@128 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:27:54.037 08:24:27 -- common/autotest_common.sh@130 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:27:54.037 08:24:27 -- common/autotest_common.sh@132 -- # : 00:27:54.037 08:24:27 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:27:54.037 08:24:27 -- common/autotest_common.sh@134 -- # : true 00:27:54.037 08:24:27 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:27:54.037 08:24:27 -- common/autotest_common.sh@136 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:27:54.037 08:24:27 -- common/autotest_common.sh@138 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:27:54.037 08:24:27 -- common/autotest_common.sh@140 -- # : 1 00:27:54.037 08:24:27 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:27:54.037 08:24:27 -- common/autotest_common.sh@142 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:27:54.037 08:24:27 -- common/autotest_common.sh@144 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:27:54.037 08:24:27 -- common/autotest_common.sh@146 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:27:54.037 08:24:27 -- common/autotest_common.sh@148 -- # : 00:27:54.037 08:24:27 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:27:54.037 08:24:27 -- common/autotest_common.sh@150 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:27:54.037 08:24:27 -- common/autotest_common.sh@152 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:27:54.037 08:24:27 -- common/autotest_common.sh@154 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:27:54.037 08:24:27 -- common/autotest_common.sh@156 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:27:54.037 08:24:27 -- common/autotest_common.sh@158 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:27:54.037 08:24:27 -- common/autotest_common.sh@160 -- # : 0 00:27:54.037 08:24:27 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:27:54.037 08:24:27 -- common/autotest_common.sh@163 -- # : 00:27:54.037 08:24:27 
-- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:27:54.037 08:24:27 -- common/autotest_common.sh@165 -- # : 1 00:27:54.037 08:24:27 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:27:54.037 08:24:27 -- common/autotest_common.sh@167 -- # : 1 00:27:54.037 08:24:27 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:27:54.037 08:24:27 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:27:54.037 08:24:27 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:27:54.037 08:24:27 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:27:54.037 08:24:27 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:27:54.037 08:24:27 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:54.037 08:24:27 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:54.037 08:24:27 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:54.038 08:24:27 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:54.038 08:24:27 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:27:54.038 08:24:27 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:27:54.038 08:24:27 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:27:54.038 08:24:27 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:27:54.038 08:24:27 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:27:54.038 08:24:27 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:27:54.038 08:24:27 -- 
common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:27:54.038 08:24:27 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:27:54.038 08:24:27 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:27:54.038 08:24:27 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:27:54.038 08:24:27 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:27:54.038 08:24:27 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:27:54.038 08:24:27 -- common/autotest_common.sh@196 -- # cat 00:27:54.038 08:24:27 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:27:54.038 08:24:27 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:27:54.038 08:24:27 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:27:54.038 08:24:27 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:27:54.038 08:24:27 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:27:54.038 08:24:27 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:27:54.038 08:24:27 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:27:54.038 08:24:27 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:27:54.038 08:24:27 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:27:54.038 08:24:27 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:27:54.038 08:24:27 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:27:54.038 08:24:27 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:27:54.038 08:24:27 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:27:54.038 08:24:27 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:27:54.038 08:24:27 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:27:54.038 08:24:27 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:27:54.038 08:24:27 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:27:54.038 08:24:27 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:54.038 08:24:27 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:54.038 08:24:27 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:27:54.038 08:24:27 -- common/autotest_common.sh@249 -- # export valgrind= 00:27:54.038 08:24:27 -- common/autotest_common.sh@249 -- # valgrind= 00:27:54.038 08:24:27 -- common/autotest_common.sh@255 -- # uname -s 00:27:54.038 08:24:27 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:27:54.038 08:24:27 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:27:54.038 08:24:27 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 
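The records above show autotest_common.sh assembling the sanitizer runtime before any test logic runs: ASan and UBSan options are exported, and a LeakSanitizer suppression file is regenerated so the known libfuse3 leak cannot fail the run. A minimal sketch of that setup, with the option strings taken verbatim from the xtrace and the surrounding script logic simplified:

export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"                      # start from a clean suppression list
echo "leak:libfuse3.so" > "$asan_suppression_file"   # LSan format: one "leak:<pattern>" per line
export LSAN_OPTIONS=suppressions=$asan_suppression_file

With abort_on_error=1 and exitcode=134, any unsuppressed sanitizer report aborts the process and fails the stage immediately instead of scrolling past as a warning.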
00:27:54.038 08:24:27 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:27:54.038 08:24:27 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:27:54.038 08:24:27 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:27:54.038 08:24:27 -- common/autotest_common.sh@265 -- # MAKE=make 00:27:54.038 08:24:27 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:27:54.038 08:24:27 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:27:54.038 08:24:27 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:27:54.038 08:24:27 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:27:54.038 08:24:27 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:27:54.038 08:24:27 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:27:54.038 08:24:27 -- common/autotest_common.sh@291 -- # for i in "$@" 00:27:54.038 08:24:27 -- common/autotest_common.sh@292 -- # case "$i" in 00:27:54.038 08:24:27 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:27:54.038 08:24:27 -- common/autotest_common.sh@309 -- # [[ -z 60434 ]] 00:27:54.038 08:24:27 -- common/autotest_common.sh@309 -- # kill -0 60434 00:27:54.297 08:24:27 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:27:54.298 08:24:27 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:27:54.298 08:24:27 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:27:54.298 08:24:27 -- common/autotest_common.sh@322 -- # local mount target_dir 00:27:54.298 08:24:27 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:27:54.298 08:24:27 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:27:54.298 08:24:27 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:27:54.298 08:24:27 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:27:54.298 08:24:27 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.lbBP0j 00:27:54.298 08:24:27 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:27:54.298 08:24:27 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:27:54.298 08:24:27 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:27:54.298 08:24:27 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.lbBP0j/tests/target /tmp/spdk.lbBP0j 00:27:54.298 08:24:27 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:27:54.298 08:24:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:54.298 08:24:27 -- common/autotest_common.sh@318 -- # df -T 00:27:54.298 08:24:27 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:27:54.298 08:24:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=devtmpfs 00:27:54.298 08:24:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:27:54.298 08:24:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=4194304 00:27:54.298 08:24:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=4194304 00:27:54.298 08:24:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:27:54.298 08:24:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:54.298 08:24:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:54.298 08:24:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:54.298 08:24:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=6266626048 00:27:54.298 08:24:27 -- 
common/autotest_common.sh@353 -- # sizes["$mount"]=6267883520 00:27:54.298 08:24:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:27:54.298 08:24:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:54.298 08:24:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:54.298 08:24:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:54.298 08:24:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=2494349312 00:27:54.298 08:24:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=2507153408 00:27:54.298 08:24:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=12804096 00:27:54.298 08:24:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:54.298 08:24:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:27:54.298 08:24:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:27:54.298 08:24:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=13810819072 00:27:54.298 08:24:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:27:54.298 08:24:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=5213839360 00:27:54.298 08:24:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:54.298 08:24:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:27:54.298 08:24:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:27:54.298 08:24:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=13810819072 00:27:54.298 08:24:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:27:54.298 08:24:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=5213839360 00:27:54.298 08:24:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:54.298 08:24:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:54.298 08:24:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:27:54.298 08:24:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267748352 00:27:54.298 08:24:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267883520 00:27:54.298 08:24:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=135168 00:27:54.298 08:24:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:54.298 08:24:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda2 00:27:54.298 08:24:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:27:54.298 08:24:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=843546624 00:27:54.298 08:24:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1012768768 00:27:54.298 08:24:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=100016128 00:27:54.298 08:24:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:54.298 08:24:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda3 00:27:54.298 08:24:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:27:54.298 08:24:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=92499968 00:27:54.298 08:24:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=104607744 00:27:54.298 08:24:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=12107776 00:27:54.298 08:24:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:54.298 08:24:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:27:54.298 08:24:27 -- common/autotest_common.sh@352 -- # 
fss["$mount"]=tmpfs 00:27:54.298 08:24:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253572608 00:27:54.298 08:24:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253576704 00:27:54.298 08:24:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:27:54.298 08:24:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:54.298 08:24:27 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output 00:27:54.298 08:24:27 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:27:54.298 08:24:27 -- common/autotest_common.sh@353 -- # avails["$mount"]=93859979264 00:27:54.298 08:24:27 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:27:54.298 08:24:27 -- common/autotest_common.sh@354 -- # uses["$mount"]=5842800640 00:27:54.298 08:24:27 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:27:54.298 08:24:27 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:27:54.298 * Looking for test storage... 00:27:54.298 08:24:27 -- common/autotest_common.sh@359 -- # local target_space new_size 00:27:54.298 08:24:27 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:27:54.298 08:24:27 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:54.298 08:24:27 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:27:54.298 08:24:27 -- common/autotest_common.sh@363 -- # mount=/home 00:27:54.298 08:24:27 -- common/autotest_common.sh@365 -- # target_space=13810819072 00:27:54.298 08:24:27 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:27:54.298 08:24:27 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:27:54.298 08:24:27 -- common/autotest_common.sh@371 -- # [[ btrfs == tmpfs ]] 00:27:54.298 08:24:27 -- common/autotest_common.sh@371 -- # [[ btrfs == ramfs ]] 00:27:54.298 08:24:27 -- common/autotest_common.sh@371 -- # [[ /home == / ]] 00:27:54.298 08:24:27 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:54.298 08:24:27 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:54.298 08:24:27 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:54.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:54.298 08:24:27 -- common/autotest_common.sh@380 -- # return 0 00:27:54.298 08:24:27 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:27:54.298 08:24:27 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:27:54.298 08:24:27 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:27:54.298 08:24:27 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:27:54.298 08:24:27 -- common/autotest_common.sh@1672 -- # true 00:27:54.298 08:24:27 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:27:54.298 08:24:27 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:27:54.298 08:24:27 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:27:54.298 08:24:27 -- common/autotest_common.sh@27 -- # exec 00:27:54.298 08:24:27 -- common/autotest_common.sh@29 -- # exec 00:27:54.298 08:24:27 -- common/autotest_common.sh@31 -- 
# xtrace_restore 00:27:54.298 08:24:27 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:27:54.298 08:24:27 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:27:54.298 08:24:27 -- common/autotest_common.sh@18 -- # set -x 00:27:54.298 08:24:27 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:54.298 08:24:27 -- nvmf/common.sh@7 -- # uname -s 00:27:54.298 08:24:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:54.298 08:24:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:54.298 08:24:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:54.298 08:24:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:54.298 08:24:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:54.298 08:24:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:54.298 08:24:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:54.298 08:24:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:54.298 08:24:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:54.298 08:24:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:54.298 08:24:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:27:54.298 08:24:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:27:54.298 08:24:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:54.298 08:24:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:54.298 08:24:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:54.298 08:24:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:54.298 08:24:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.298 08:24:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.298 08:24:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.298 08:24:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.299 08:24:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.299 08:24:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.299 08:24:27 -- paths/export.sh@5 -- # export PATH 00:27:54.299 08:24:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.299 08:24:27 -- nvmf/common.sh@46 -- # : 0 00:27:54.299 08:24:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:54.299 08:24:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:54.299 08:24:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:54.299 08:24:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:54.299 08:24:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:54.299 08:24:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:54.299 08:24:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:54.299 08:24:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:54.299 08:24:27 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:27:54.299 08:24:27 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:54.299 08:24:27 -- target/filesystem.sh@15 -- # nvmftestinit 00:27:54.299 08:24:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:54.299 08:24:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:54.299 08:24:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:54.299 08:24:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:54.299 08:24:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:54.299 08:24:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.299 08:24:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:54.299 08:24:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.299 08:24:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:27:54.299 08:24:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:27:54.299 08:24:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:27:54.299 08:24:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:27:54.299 08:24:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:27:54.299 08:24:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:27:54.299 08:24:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:54.299 08:24:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:54.299 08:24:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:54.299 08:24:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:27:54.299 08:24:27 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:54.299 08:24:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:54.299 08:24:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:54.299 08:24:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:54.299 08:24:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:54.299 08:24:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:54.299 08:24:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:54.299 08:24:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:54.299 08:24:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:27:54.299 08:24:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:27:54.299 Cannot find device "nvmf_tgt_br" 00:27:54.299 08:24:27 -- nvmf/common.sh@154 -- # true 00:27:54.299 08:24:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:27:54.299 Cannot find device "nvmf_tgt_br2" 00:27:54.299 08:24:27 -- nvmf/common.sh@155 -- # true 00:27:54.299 08:24:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:27:54.299 08:24:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:27:54.299 Cannot find device "nvmf_tgt_br" 00:27:54.299 08:24:27 -- nvmf/common.sh@157 -- # true 00:27:54.299 08:24:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:27:54.299 Cannot find device "nvmf_tgt_br2" 00:27:54.299 08:24:27 -- nvmf/common.sh@158 -- # true 00:27:54.299 08:24:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:27:54.299 08:24:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:27:54.299 08:24:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:54.299 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:54.299 08:24:27 -- nvmf/common.sh@161 -- # true 00:27:54.299 08:24:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:54.299 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:54.299 08:24:27 -- nvmf/common.sh@162 -- # true 00:27:54.299 08:24:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:27:54.299 08:24:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:54.299 08:24:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:54.299 08:24:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:54.299 08:24:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:54.558 08:24:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:54.558 08:24:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:54.558 08:24:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:54.558 08:24:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:54.558 08:24:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:27:54.558 08:24:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:27:54.558 08:24:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:27:54.558 08:24:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:27:54.558 08:24:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:54.558 08:24:27 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:54.558 08:24:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:54.558 08:24:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:27:54.558 08:24:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:27:54.558 08:24:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:27:54.558 08:24:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:54.558 08:24:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:54.558 08:24:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:54.558 08:24:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:54.558 08:24:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:27:54.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:54.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:27:54.558 00:27:54.558 --- 10.0.0.2 ping statistics --- 00:27:54.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.558 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:27:54.558 08:24:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:27:54.558 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:54.558 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:27:54.558 00:27:54.558 --- 10.0.0.3 ping statistics --- 00:27:54.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.558 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:27:54.558 08:24:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:54.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:54.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:27:54.558 00:27:54.558 --- 10.0.0.1 ping statistics --- 00:27:54.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.558 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:27:54.558 08:24:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:54.558 08:24:27 -- nvmf/common.sh@421 -- # return 0 00:27:54.558 08:24:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:54.558 08:24:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:54.558 08:24:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:54.558 08:24:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:54.558 08:24:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:54.558 08:24:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:54.558 08:24:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:54.558 08:24:27 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:27:54.558 08:24:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:54.558 08:24:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:54.558 08:24:27 -- common/autotest_common.sh@10 -- # set +x 00:27:54.558 ************************************ 00:27:54.558 START TEST nvmf_filesystem_no_in_capsule 00:27:54.558 ************************************ 00:27:54.558 08:24:27 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:27:54.558 08:24:27 -- target/filesystem.sh@47 -- # in_capsule=0 00:27:54.558 08:24:27 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:27:54.558 08:24:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:54.558 08:24:27 -- common/autotest_common.sh@712 -- # 
xtrace_disable 00:27:54.559 08:24:27 -- common/autotest_common.sh@10 -- # set +x 00:27:54.559 08:24:27 -- nvmf/common.sh@469 -- # nvmfpid=60594 00:27:54.559 08:24:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:54.559 08:24:27 -- nvmf/common.sh@470 -- # waitforlisten 60594 00:27:54.559 08:24:27 -- common/autotest_common.sh@819 -- # '[' -z 60594 ']' 00:27:54.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.559 08:24:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.559 08:24:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:54.559 08:24:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.559 08:24:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:54.559 08:24:27 -- common/autotest_common.sh@10 -- # set +x 00:27:54.559 [2024-04-17 08:24:27.862249] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:54.559 [2024-04-17 08:24:27.862463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:54.817 [2024-04-17 08:24:27.995365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:55.076 [2024-04-17 08:24:28.161908] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:55.076 [2024-04-17 08:24:28.162211] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:55.076 [2024-04-17 08:24:28.162258] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:55.076 [2024-04-17 08:24:28.162290] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
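Everything from nvmf_veth_init onward builds the virtual topology the TCP tests run on: the initiator stays in the root namespace on 10.0.0.1 while the target listens from the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, with the root-side veth peers enslaved to the nvmf_br bridge. The initial "Cannot find device" and "Cannot open network namespace" errors are expected; the teardown half of the helper runs first and finds nothing to delete on a fresh node. A condensed sketch of the setup half, using only commands that appear in the records above:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair (root ns)
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target pair 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target pair 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk              # move target ends into the ns
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the root-ns peer of each pair
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                           # initiator-to-target sanity check

The three ping checks that follow confirm both directions of the path (root ns to 10.0.0.2 and 10.0.0.3, and namespace back to 10.0.0.1) before modprobe nvme-tcp loads the host driver.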
00:27:55.076 [2024-04-17 08:24:28.162429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.076 [2024-04-17 08:24:28.162491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:55.076 [2024-04-17 08:24:28.162577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.076 [2024-04-17 08:24:28.162577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:55.645 08:24:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:55.645 08:24:28 -- common/autotest_common.sh@852 -- # return 0 00:27:55.645 08:24:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:55.645 08:24:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:55.645 08:24:28 -- common/autotest_common.sh@10 -- # set +x 00:27:55.645 08:24:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:55.645 08:24:28 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:27:55.645 08:24:28 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:27:55.645 08:24:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.645 08:24:28 -- common/autotest_common.sh@10 -- # set +x 00:27:55.645 [2024-04-17 08:24:28.883137] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.645 08:24:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.645 08:24:28 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:27:55.645 08:24:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.645 08:24:28 -- common/autotest_common.sh@10 -- # set +x 00:27:55.905 Malloc1 00:27:55.905 08:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.905 08:24:29 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:55.905 08:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.905 08:24:29 -- common/autotest_common.sh@10 -- # set +x 00:27:55.905 08:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.905 08:24:29 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:55.905 08:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.905 08:24:29 -- common/autotest_common.sh@10 -- # set +x 00:27:55.905 08:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.905 08:24:29 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:55.905 08:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.905 08:24:29 -- common/autotest_common.sh@10 -- # set +x 00:27:55.905 [2024-04-17 08:24:29.153252] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.905 08:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.905 08:24:29 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:27:55.905 08:24:29 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:27:55.905 08:24:29 -- common/autotest_common.sh@1358 -- # local bdev_info 00:27:55.905 08:24:29 -- common/autotest_common.sh@1359 -- # local bs 00:27:55.905 08:24:29 -- common/autotest_common.sh@1360 -- # local nb 00:27:55.905 08:24:29 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:27:55.905 08:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.905 08:24:29 -- common/autotest_common.sh@10 -- # set +x 00:27:55.905 
08:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.905 08:24:29 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:27:55.905 { 00:27:55.905 "aliases": [ 00:27:55.905 "b087543f-44b6-4d14-ae28-39f684f8adb2" 00:27:55.905 ], 00:27:55.905 "assigned_rate_limits": { 00:27:55.905 "r_mbytes_per_sec": 0, 00:27:55.905 "rw_ios_per_sec": 0, 00:27:55.905 "rw_mbytes_per_sec": 0, 00:27:55.905 "w_mbytes_per_sec": 0 00:27:55.905 }, 00:27:55.905 "block_size": 512, 00:27:55.905 "claim_type": "exclusive_write", 00:27:55.905 "claimed": true, 00:27:55.905 "driver_specific": {}, 00:27:55.905 "memory_domains": [ 00:27:55.905 { 00:27:55.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:55.905 "dma_device_type": 2 00:27:55.905 } 00:27:55.905 ], 00:27:55.905 "name": "Malloc1", 00:27:55.905 "num_blocks": 1048576, 00:27:55.905 "product_name": "Malloc disk", 00:27:55.905 "supported_io_types": { 00:27:55.905 "abort": true, 00:27:55.905 "compare": false, 00:27:55.905 "compare_and_write": false, 00:27:55.905 "flush": true, 00:27:55.906 "nvme_admin": false, 00:27:55.906 "nvme_io": false, 00:27:55.906 "read": true, 00:27:55.906 "reset": true, 00:27:55.906 "unmap": true, 00:27:55.906 "write": true, 00:27:55.906 "write_zeroes": true 00:27:55.906 }, 00:27:55.906 "uuid": "b087543f-44b6-4d14-ae28-39f684f8adb2", 00:27:55.906 "zoned": false 00:27:55.906 } 00:27:55.906 ]' 00:27:55.906 08:24:29 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:27:55.906 08:24:29 -- common/autotest_common.sh@1362 -- # bs=512 00:27:55.906 08:24:29 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:27:56.168 08:24:29 -- common/autotest_common.sh@1363 -- # nb=1048576 00:27:56.168 08:24:29 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:27:56.168 08:24:29 -- common/autotest_common.sh@1367 -- # echo 512 00:27:56.168 08:24:29 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:27:56.168 08:24:29 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:56.168 08:24:29 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:27:56.168 08:24:29 -- common/autotest_common.sh@1177 -- # local i=0 00:27:56.168 08:24:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:56.168 08:24:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:56.168 08:24:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:58.703 08:24:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:58.703 08:24:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:58.703 08:24:31 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:27:58.703 08:24:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:58.703 08:24:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:58.703 08:24:31 -- common/autotest_common.sh@1187 -- # return 0 00:27:58.703 08:24:31 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:27:58.703 08:24:31 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:27:58.703 08:24:31 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:27:58.703 08:24:31 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:27:58.703 08:24:31 -- setup/common.sh@76 -- # local dev=nvme0n1 00:27:58.703 08:24:31 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:58.703 08:24:31 -- 
setup/common.sh@80 -- # echo 536870912 00:27:58.703 08:24:31 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:27:58.703 08:24:31 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:27:58.703 08:24:31 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:27:58.703 08:24:31 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:27:58.703 08:24:31 -- target/filesystem.sh@69 -- # partprobe 00:27:58.703 08:24:31 -- target/filesystem.sh@70 -- # sleep 1 00:27:59.641 08:24:32 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:27:59.641 08:24:32 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:27:59.641 08:24:32 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:27:59.642 08:24:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:59.642 08:24:32 -- common/autotest_common.sh@10 -- # set +x 00:27:59.642 ************************************ 00:27:59.642 START TEST filesystem_ext4 00:27:59.642 ************************************ 00:27:59.642 08:24:32 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:27:59.642 08:24:32 -- target/filesystem.sh@18 -- # fstype=ext4 00:27:59.642 08:24:32 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:27:59.642 08:24:32 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:27:59.642 08:24:32 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:27:59.642 08:24:32 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:27:59.642 08:24:32 -- common/autotest_common.sh@904 -- # local i=0 00:27:59.642 08:24:32 -- common/autotest_common.sh@905 -- # local force 00:27:59.642 08:24:32 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:27:59.642 08:24:32 -- common/autotest_common.sh@908 -- # force=-F 00:27:59.642 08:24:32 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:27:59.642 mke2fs 1.46.5 (30-Dec-2021) 00:27:59.642 Discarding device blocks: 0/522240 done 00:27:59.642 Creating filesystem with 522240 1k blocks and 130560 inodes 00:27:59.642 Filesystem UUID: d04a0ea5-e79b-4ac0-9bcd-271101822491 00:27:59.642 Superblock backups stored on blocks: 00:27:59.642 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:27:59.642 00:27:59.642 Allocating group tables: 0/64 done 00:27:59.642 Writing inode tables: 0/64 done 00:27:59.642 Creating journal (8192 blocks): done 00:27:59.642 Writing superblocks and filesystem accounting information: 0/64 done 00:27:59.642 00:27:59.642 08:24:32 -- common/autotest_common.sh@921 -- # return 0 00:27:59.642 08:24:32 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:27:59.901 08:24:32 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:27:59.901 08:24:33 -- target/filesystem.sh@25 -- # sync 00:27:59.901 08:24:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:27:59.901 08:24:33 -- target/filesystem.sh@27 -- # sync 00:27:59.901 08:24:33 -- target/filesystem.sh@29 -- # i=0 00:27:59.901 08:24:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:27:59.901 08:24:33 -- target/filesystem.sh@37 -- # kill -0 60594 00:27:59.901 08:24:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:27:59.901 08:24:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:27:59.901 08:24:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:27:59.901 08:24:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:27:59.901 00:27:59.901 real 0m0.450s 00:27:59.901 user 0m0.028s 00:27:59.901 sys 0m0.073s 00:27:59.901 
08:24:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:59.901 08:24:33 -- common/autotest_common.sh@10 -- # set +x 00:27:59.901 ************************************ 00:27:59.901 END TEST filesystem_ext4 00:27:59.901 ************************************ 00:27:59.901 08:24:33 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:27:59.901 08:24:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:27:59.901 08:24:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:59.901 08:24:33 -- common/autotest_common.sh@10 -- # set +x 00:27:59.901 ************************************ 00:27:59.901 START TEST filesystem_btrfs 00:27:59.901 ************************************ 00:27:59.901 08:24:33 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:27:59.901 08:24:33 -- target/filesystem.sh@18 -- # fstype=btrfs 00:27:59.901 08:24:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:27:59.901 08:24:33 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:27:59.901 08:24:33 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:27:59.901 08:24:33 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:27:59.901 08:24:33 -- common/autotest_common.sh@904 -- # local i=0 00:27:59.901 08:24:33 -- common/autotest_common.sh@905 -- # local force 00:27:59.901 08:24:33 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:27:59.901 08:24:33 -- common/autotest_common.sh@910 -- # force=-f 00:27:59.901 08:24:33 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:28:00.160 btrfs-progs v6.6.2 00:28:00.160 See https://btrfs.readthedocs.io for more information. 00:28:00.160 00:28:00.160 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:28:00.160 NOTE: several default settings have changed in version 5.15, please make sure 00:28:00.160 this does not affect your deployments: 00:28:00.160 - DUP for metadata (-m dup) 00:28:00.160 - enabled no-holes (-O no-holes) 00:28:00.160 - enabled free-space-tree (-R free-space-tree) 00:28:00.160 00:28:00.160 Label: (null) 00:28:00.160 UUID: 2545d5ec-7d16-4317-b6ca-1c322ee18e3c 00:28:00.160 Node size: 16384 00:28:00.160 Sector size: 4096 00:28:00.160 Filesystem size: 510.00MiB 00:28:00.160 Block group profiles: 00:28:00.160 Data: single 8.00MiB 00:28:00.160 Metadata: DUP 32.00MiB 00:28:00.160 System: DUP 8.00MiB 00:28:00.160 SSD detected: yes 00:28:00.160 Zoned device: no 00:28:00.160 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:28:00.160 Runtime features: free-space-tree 00:28:00.160 Checksum: crc32c 00:28:00.160 Number of devices: 1 00:28:00.161 Devices: 00:28:00.161 ID SIZE PATH 00:28:00.161 1 510.00MiB /dev/nvme0n1p1 00:28:00.161 00:28:00.161 08:24:33 -- common/autotest_common.sh@921 -- # return 0 00:28:00.161 08:24:33 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:28:00.161 08:24:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:28:00.161 08:24:33 -- target/filesystem.sh@25 -- # sync 00:28:00.161 08:24:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:28:00.161 08:24:33 -- target/filesystem.sh@27 -- # sync 00:28:00.161 08:24:33 -- target/filesystem.sh@29 -- # i=0 00:28:00.161 08:24:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:28:00.420 08:24:33 -- target/filesystem.sh@37 -- # kill -0 60594 00:28:00.420 08:24:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:28:00.420 08:24:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:28:00.420 08:24:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:28:00.420 08:24:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:28:00.420 ************************************ 00:28:00.420 END TEST filesystem_btrfs 00:28:00.420 ************************************ 00:28:00.420 00:28:00.420 real 0m0.324s 00:28:00.420 user 0m0.023s 00:28:00.420 sys 0m0.078s 00:28:00.420 08:24:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:00.420 08:24:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.420 08:24:33 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:28:00.420 08:24:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:28:00.420 08:24:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:00.420 08:24:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.420 ************************************ 00:28:00.420 START TEST filesystem_xfs 00:28:00.420 ************************************ 00:28:00.420 08:24:33 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:28:00.420 08:24:33 -- target/filesystem.sh@18 -- # fstype=xfs 00:28:00.420 08:24:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:28:00.420 08:24:33 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:28:00.420 08:24:33 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:28:00.420 08:24:33 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:28:00.420 08:24:33 -- common/autotest_common.sh@904 -- # local i=0 00:28:00.420 08:24:33 -- common/autotest_common.sh@905 -- # local force 00:28:00.420 08:24:33 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:28:00.420 08:24:33 -- common/autotest_common.sh@910 -- # force=-f 00:28:00.420 08:24:33 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:28:00.420 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:28:00.420 = sectsz=512 attr=2, projid32bit=1 00:28:00.420 = crc=1 finobt=1, sparse=1, rmapbt=0 00:28:00.420 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:28:00.420 data = bsize=4096 blocks=130560, imaxpct=25 00:28:00.420 = sunit=0 swidth=0 blks 00:28:00.420 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:28:00.420 log =internal log bsize=4096 blocks=16384, version=2 00:28:00.420 = sectsz=512 sunit=0 blks, lazy-count=1 00:28:00.420 realtime =none extsz=4096 blocks=0, rtextents=0 00:28:01.355 Discarding blocks...Done. 00:28:01.355 08:24:34 -- common/autotest_common.sh@921 -- # return 0 00:28:01.355 08:24:34 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:28:03.917 08:24:36 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:28:03.917 08:24:36 -- target/filesystem.sh@25 -- # sync 00:28:03.917 08:24:36 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:28:03.917 08:24:36 -- target/filesystem.sh@27 -- # sync 00:28:03.917 08:24:36 -- target/filesystem.sh@29 -- # i=0 00:28:03.917 08:24:36 -- target/filesystem.sh@30 -- # umount /mnt/device 00:28:03.917 08:24:36 -- target/filesystem.sh@37 -- # kill -0 60594 00:28:03.917 08:24:36 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:28:03.917 08:24:36 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:28:03.917 08:24:36 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:28:03.917 08:24:36 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:28:03.917 ************************************ 00:28:03.917 END TEST filesystem_xfs 00:28:03.917 ************************************ 00:28:03.917 00:28:03.917 real 0m3.102s 00:28:03.917 user 0m0.031s 00:28:03.917 sys 0m0.066s 00:28:03.917 08:24:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:03.917 08:24:36 -- common/autotest_common.sh@10 -- # set +x 00:28:03.917 08:24:36 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:28:03.917 08:24:36 -- target/filesystem.sh@93 -- # sync 00:28:03.917 08:24:36 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:03.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:03.917 08:24:36 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:03.917 08:24:36 -- common/autotest_common.sh@1198 -- # local i=0 00:28:03.917 08:24:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:03.917 08:24:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:03.917 08:24:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:03.917 08:24:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:03.917 08:24:36 -- common/autotest_common.sh@1210 -- # return 0 00:28:03.917 08:24:36 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:03.917 08:24:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:03.918 08:24:36 -- common/autotest_common.sh@10 -- # set +x 00:28:03.918 08:24:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:03.918 08:24:36 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:28:03.918 08:24:36 -- target/filesystem.sh@101 -- # killprocess 60594 00:28:03.918 08:24:36 -- common/autotest_common.sh@926 -- # '[' -z 60594 ']' 00:28:03.918 08:24:36 -- common/autotest_common.sh@930 -- # kill -0 60594 00:28:03.918 08:24:36 -- 
common/autotest_common.sh@931 -- # uname 00:28:03.918 08:24:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:03.918 08:24:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60594 00:28:03.918 08:24:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:03.918 08:24:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:03.918 08:24:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60594' 00:28:03.918 killing process with pid 60594 00:28:03.918 08:24:36 -- common/autotest_common.sh@945 -- # kill 60594 00:28:03.918 08:24:36 -- common/autotest_common.sh@950 -- # wait 60594 00:28:04.485 08:24:37 -- target/filesystem.sh@102 -- # nvmfpid= 00:28:04.485 ************************************ 00:28:04.485 END TEST nvmf_filesystem_no_in_capsule 00:28:04.485 ************************************ 00:28:04.485 00:28:04.485 real 0m9.743s 00:28:04.485 user 0m37.044s 00:28:04.485 sys 0m1.339s 00:28:04.485 08:24:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:04.485 08:24:37 -- common/autotest_common.sh@10 -- # set +x 00:28:04.485 08:24:37 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:28:04.485 08:24:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:04.485 08:24:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:04.485 08:24:37 -- common/autotest_common.sh@10 -- # set +x 00:28:04.485 ************************************ 00:28:04.485 START TEST nvmf_filesystem_in_capsule 00:28:04.485 ************************************ 00:28:04.485 08:24:37 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:28:04.485 08:24:37 -- target/filesystem.sh@47 -- # in_capsule=4096 00:28:04.485 08:24:37 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:28:04.485 08:24:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:04.485 08:24:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:04.485 08:24:37 -- common/autotest_common.sh@10 -- # set +x 00:28:04.485 08:24:37 -- nvmf/common.sh@469 -- # nvmfpid=60907 00:28:04.485 08:24:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:04.485 08:24:37 -- nvmf/common.sh@470 -- # waitforlisten 60907 00:28:04.485 08:24:37 -- common/autotest_common.sh@819 -- # '[' -z 60907 ']' 00:28:04.485 08:24:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.485 08:24:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:04.485 08:24:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.485 08:24:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:04.485 08:24:37 -- common/autotest_common.sh@10 -- # set +x 00:28:04.485 [2024-04-17 08:24:37.675813] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:04.485 [2024-04-17 08:24:37.675959] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.743 [2024-04-17 08:24:37.819437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:04.743 [2024-04-17 08:24:37.920688] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:04.743 [2024-04-17 08:24:37.920807] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:04.743 [2024-04-17 08:24:37.920815] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:04.743 [2024-04-17 08:24:37.920820] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:04.743 [2024-04-17 08:24:37.921053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.743 [2024-04-17 08:24:37.921226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.743 [2024-04-17 08:24:37.921362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:04.743 [2024-04-17 08:24:37.921361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.308 08:24:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:05.308 08:24:38 -- common/autotest_common.sh@852 -- # return 0 00:28:05.308 08:24:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:05.308 08:24:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:05.308 08:24:38 -- common/autotest_common.sh@10 -- # set +x 00:28:05.308 08:24:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:05.308 08:24:38 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:28:05.308 08:24:38 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:28:05.308 08:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.308 08:24:38 -- common/autotest_common.sh@10 -- # set +x 00:28:05.309 [2024-04-17 08:24:38.619472] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:05.567 08:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.567 08:24:38 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:28:05.567 08:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.567 08:24:38 -- common/autotest_common.sh@10 -- # set +x 00:28:05.567 Malloc1 00:28:05.567 08:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.567 08:24:38 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:05.568 08:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.568 08:24:38 -- common/autotest_common.sh@10 -- # set +x 00:28:05.568 08:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.568 08:24:38 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:05.568 08:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.568 08:24:38 -- common/autotest_common.sh@10 -- # set +x 00:28:05.568 08:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.568 08:24:38 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:05.568 08:24:38 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.568 08:24:38 -- common/autotest_common.sh@10 -- # set +x 00:28:05.568 [2024-04-17 08:24:38.798152] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.568 08:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.568 08:24:38 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:28:05.568 08:24:38 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:28:05.568 08:24:38 -- common/autotest_common.sh@1358 -- # local bdev_info 00:28:05.568 08:24:38 -- common/autotest_common.sh@1359 -- # local bs 00:28:05.568 08:24:38 -- common/autotest_common.sh@1360 -- # local nb 00:28:05.568 08:24:38 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:28:05.568 08:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.568 08:24:38 -- common/autotest_common.sh@10 -- # set +x 00:28:05.568 08:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.568 08:24:38 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:28:05.568 { 00:28:05.568 "aliases": [ 00:28:05.568 "80a838be-af21-4a6f-a601-2c41ec2b4d0f" 00:28:05.568 ], 00:28:05.568 "assigned_rate_limits": { 00:28:05.568 "r_mbytes_per_sec": 0, 00:28:05.568 "rw_ios_per_sec": 0, 00:28:05.568 "rw_mbytes_per_sec": 0, 00:28:05.568 "w_mbytes_per_sec": 0 00:28:05.568 }, 00:28:05.568 "block_size": 512, 00:28:05.568 "claim_type": "exclusive_write", 00:28:05.568 "claimed": true, 00:28:05.568 "driver_specific": {}, 00:28:05.568 "memory_domains": [ 00:28:05.568 { 00:28:05.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:05.568 "dma_device_type": 2 00:28:05.568 } 00:28:05.568 ], 00:28:05.568 "name": "Malloc1", 00:28:05.568 "num_blocks": 1048576, 00:28:05.568 "product_name": "Malloc disk", 00:28:05.568 "supported_io_types": { 00:28:05.568 "abort": true, 00:28:05.568 "compare": false, 00:28:05.568 "compare_and_write": false, 00:28:05.568 "flush": true, 00:28:05.568 "nvme_admin": false, 00:28:05.568 "nvme_io": false, 00:28:05.568 "read": true, 00:28:05.568 "reset": true, 00:28:05.568 "unmap": true, 00:28:05.568 "write": true, 00:28:05.568 "write_zeroes": true 00:28:05.568 }, 00:28:05.568 "uuid": "80a838be-af21-4a6f-a601-2c41ec2b4d0f", 00:28:05.568 "zoned": false 00:28:05.568 } 00:28:05.568 ]' 00:28:05.568 08:24:38 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:28:05.568 08:24:38 -- common/autotest_common.sh@1362 -- # bs=512 00:28:05.568 08:24:38 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:28:05.828 08:24:38 -- common/autotest_common.sh@1363 -- # nb=1048576 00:28:05.828 08:24:38 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:28:05.828 08:24:38 -- common/autotest_common.sh@1367 -- # echo 512 00:28:05.828 08:24:38 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:28:05.828 08:24:38 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:05.828 08:24:39 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:28:05.828 08:24:39 -- common/autotest_common.sh@1177 -- # local i=0 00:28:05.828 08:24:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:28:05.828 08:24:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:28:05.828 08:24:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:28:08.361 08:24:41 -- 
common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:28:08.361 08:24:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:28:08.361 08:24:41 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:28:08.361 08:24:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:28:08.361 08:24:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:28:08.361 08:24:41 -- common/autotest_common.sh@1187 -- # return 0 00:28:08.361 08:24:41 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:28:08.361 08:24:41 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:28:08.361 08:24:41 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:28:08.361 08:24:41 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:28:08.361 08:24:41 -- setup/common.sh@76 -- # local dev=nvme0n1 00:28:08.361 08:24:41 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:08.361 08:24:41 -- setup/common.sh@80 -- # echo 536870912 00:28:08.361 08:24:41 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:28:08.361 08:24:41 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:28:08.361 08:24:41 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:28:08.361 08:24:41 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:28:08.361 08:24:41 -- target/filesystem.sh@69 -- # partprobe 00:28:08.361 08:24:41 -- target/filesystem.sh@70 -- # sleep 1 00:28:08.928 08:24:42 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:28:08.928 08:24:42 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:28:08.928 08:24:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:28:08.928 08:24:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:08.928 08:24:42 -- common/autotest_common.sh@10 -- # set +x 00:28:08.928 ************************************ 00:28:08.928 START TEST filesystem_in_capsule_ext4 00:28:08.928 ************************************ 00:28:08.928 08:24:42 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:28:08.928 08:24:42 -- target/filesystem.sh@18 -- # fstype=ext4 00:28:08.928 08:24:42 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:28:08.928 08:24:42 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:28:08.928 08:24:42 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:28:08.928 08:24:42 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:28:08.928 08:24:42 -- common/autotest_common.sh@904 -- # local i=0 00:28:08.928 08:24:42 -- common/autotest_common.sh@905 -- # local force 00:28:08.928 08:24:42 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:28:08.928 08:24:42 -- common/autotest_common.sh@908 -- # force=-F 00:28:08.928 08:24:42 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:28:09.187 mke2fs 1.46.5 (30-Dec-2021) 00:28:09.187 Discarding device blocks: 0/522240 done 00:28:09.187 Creating filesystem with 522240 1k blocks and 130560 inodes 00:28:09.187 Filesystem UUID: 60bd9d02-08ba-4f80-9ab5-0a22fe652b08 00:28:09.187 Superblock backups stored on blocks: 00:28:09.187 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:28:09.187 00:28:09.187 Allocating group tables: 0/64 done 00:28:09.187 Writing inode tables: 0/64 done 00:28:09.188 Creating journal (8192 blocks): done 00:28:09.188 Writing superblocks and filesystem accounting information: 0/64 done 00:28:09.188 00:28:09.188 
08:24:42 -- common/autotest_common.sh@921 -- # return 0 00:28:09.188 08:24:42 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:28:09.188 08:24:42 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:28:09.446 08:24:42 -- target/filesystem.sh@25 -- # sync 00:28:09.446 08:24:42 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:28:09.446 08:24:42 -- target/filesystem.sh@27 -- # sync 00:28:09.446 08:24:42 -- target/filesystem.sh@29 -- # i=0 00:28:09.446 08:24:42 -- target/filesystem.sh@30 -- # umount /mnt/device 00:28:09.446 08:24:42 -- target/filesystem.sh@37 -- # kill -0 60907 00:28:09.446 08:24:42 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:28:09.446 08:24:42 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:28:09.706 08:24:42 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:28:09.706 08:24:42 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:28:09.706 ************************************ 00:28:09.706 END TEST filesystem_in_capsule_ext4 00:28:09.706 ************************************ 00:28:09.706 00:28:09.706 real 0m0.548s 00:28:09.706 user 0m0.026s 00:28:09.706 sys 0m0.087s 00:28:09.706 08:24:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:09.706 08:24:42 -- common/autotest_common.sh@10 -- # set +x 00:28:09.706 08:24:42 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:28:09.706 08:24:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:28:09.706 08:24:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:09.706 08:24:42 -- common/autotest_common.sh@10 -- # set +x 00:28:09.706 ************************************ 00:28:09.706 START TEST filesystem_in_capsule_btrfs 00:28:09.706 ************************************ 00:28:09.706 08:24:42 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:28:09.706 08:24:42 -- target/filesystem.sh@18 -- # fstype=btrfs 00:28:09.706 08:24:42 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:28:09.706 08:24:42 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:28:09.706 08:24:42 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:28:09.706 08:24:42 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:28:09.706 08:24:42 -- common/autotest_common.sh@904 -- # local i=0 00:28:09.706 08:24:42 -- common/autotest_common.sh@905 -- # local force 00:28:09.706 08:24:42 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:28:09.706 08:24:42 -- common/autotest_common.sh@910 -- # force=-f 00:28:09.706 08:24:42 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:28:10.003 btrfs-progs v6.6.2 00:28:10.003 See https://btrfs.readthedocs.io for more information. 00:28:10.003 00:28:10.003 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:28:10.003 NOTE: several default settings have changed in version 5.15, please make sure 00:28:10.003 this does not affect your deployments: 00:28:10.003 - DUP for metadata (-m dup) 00:28:10.003 - enabled no-holes (-O no-holes) 00:28:10.003 - enabled free-space-tree (-R free-space-tree) 00:28:10.003 00:28:10.003 Label: (null) 00:28:10.003 UUID: c13f7018-029f-4371-9e42-feddf710c1a9 00:28:10.003 Node size: 16384 00:28:10.003 Sector size: 4096 00:28:10.003 Filesystem size: 510.00MiB 00:28:10.003 Block group profiles: 00:28:10.003 Data: single 8.00MiB 00:28:10.003 Metadata: DUP 32.00MiB 00:28:10.003 System: DUP 8.00MiB 00:28:10.003 SSD detected: yes 00:28:10.003 Zoned device: no 00:28:10.003 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:28:10.003 Runtime features: free-space-tree 00:28:10.003 Checksum: crc32c 00:28:10.003 Number of devices: 1 00:28:10.003 Devices: 00:28:10.003 ID SIZE PATH 00:28:10.003 1 510.00MiB /dev/nvme0n1p1 00:28:10.003 00:28:10.003 08:24:43 -- common/autotest_common.sh@921 -- # return 0 00:28:10.003 08:24:43 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:28:10.003 08:24:43 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:28:10.003 08:24:43 -- target/filesystem.sh@25 -- # sync 00:28:10.003 08:24:43 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:28:10.003 08:24:43 -- target/filesystem.sh@27 -- # sync 00:28:10.003 08:24:43 -- target/filesystem.sh@29 -- # i=0 00:28:10.003 08:24:43 -- target/filesystem.sh@30 -- # umount /mnt/device 00:28:10.003 08:24:43 -- target/filesystem.sh@37 -- # kill -0 60907 00:28:10.003 08:24:43 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:28:10.003 08:24:43 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:28:10.003 08:24:43 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:28:10.003 08:24:43 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:28:10.003 ************************************ 00:28:10.003 END TEST filesystem_in_capsule_btrfs 00:28:10.003 ************************************ 00:28:10.003 00:28:10.003 real 0m0.291s 00:28:10.003 user 0m0.018s 00:28:10.003 sys 0m0.081s 00:28:10.003 08:24:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:10.003 08:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:10.003 08:24:43 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:28:10.003 08:24:43 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:28:10.003 08:24:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:10.003 08:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:10.003 ************************************ 00:28:10.003 START TEST filesystem_in_capsule_xfs 00:28:10.003 ************************************ 00:28:10.003 08:24:43 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:28:10.003 08:24:43 -- target/filesystem.sh@18 -- # fstype=xfs 00:28:10.003 08:24:43 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:28:10.003 08:24:43 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:28:10.003 08:24:43 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:28:10.003 08:24:43 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:28:10.003 08:24:43 -- common/autotest_common.sh@904 -- # local i=0 00:28:10.003 08:24:43 -- common/autotest_common.sh@905 -- # local force 00:28:10.003 08:24:43 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:28:10.003 08:24:43 -- common/autotest_common.sh@910 -- # force=-f 
00:28:10.003 08:24:43 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:28:10.293 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:28:10.293 = sectsz=512 attr=2, projid32bit=1 00:28:10.293 = crc=1 finobt=1, sparse=1, rmapbt=0 00:28:10.293 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:28:10.293 data = bsize=4096 blocks=130560, imaxpct=25 00:28:10.293 = sunit=0 swidth=0 blks 00:28:10.293 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:28:10.293 log =internal log bsize=4096 blocks=16384, version=2 00:28:10.293 = sectsz=512 sunit=0 blks, lazy-count=1 00:28:10.293 realtime =none extsz=4096 blocks=0, rtextents=0 00:28:10.861 Discarding blocks...Done. 00:28:10.861 08:24:44 -- common/autotest_common.sh@921 -- # return 0 00:28:10.861 08:24:44 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:28:12.790 08:24:45 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:28:12.790 08:24:45 -- target/filesystem.sh@25 -- # sync 00:28:12.790 08:24:45 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:28:12.790 08:24:45 -- target/filesystem.sh@27 -- # sync 00:28:12.790 08:24:45 -- target/filesystem.sh@29 -- # i=0 00:28:12.790 08:24:45 -- target/filesystem.sh@30 -- # umount /mnt/device 00:28:12.790 08:24:45 -- target/filesystem.sh@37 -- # kill -0 60907 00:28:12.790 08:24:45 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:28:12.790 08:24:45 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:28:12.790 08:24:45 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:28:12.790 08:24:45 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:28:12.790 ************************************ 00:28:12.790 END TEST filesystem_in_capsule_xfs 00:28:12.790 ************************************ 00:28:12.790 00:28:12.790 real 0m2.719s 00:28:12.790 user 0m0.029s 00:28:12.790 sys 0m0.081s 00:28:12.790 08:24:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:12.790 08:24:45 -- common/autotest_common.sh@10 -- # set +x 00:28:12.790 08:24:45 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:28:12.790 08:24:46 -- target/filesystem.sh@93 -- # sync 00:28:12.790 08:24:46 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:12.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:12.790 08:24:46 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:12.790 08:24:46 -- common/autotest_common.sh@1198 -- # local i=0 00:28:12.790 08:24:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:12.790 08:24:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:12.790 08:24:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:12.790 08:24:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:12.790 08:24:46 -- common/autotest_common.sh@1210 -- # return 0 00:28:12.790 08:24:46 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:12.790 08:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:12.790 08:24:46 -- common/autotest_common.sh@10 -- # set +x 00:28:12.790 08:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:12.790 08:24:46 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:28:12.790 08:24:46 -- target/filesystem.sh@101 -- # killprocess 60907 00:28:12.790 08:24:46 -- common/autotest_common.sh@926 -- # '[' -z 60907 ']' 00:28:12.790 08:24:46 -- common/autotest_common.sh@930 -- # kill -0 60907 
00:28:12.790 08:24:46 -- common/autotest_common.sh@931 -- # uname 00:28:13.049 08:24:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:13.049 08:24:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60907 00:28:13.049 killing process with pid 60907 00:28:13.049 08:24:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:13.049 08:24:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:13.049 08:24:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60907' 00:28:13.049 08:24:46 -- common/autotest_common.sh@945 -- # kill 60907 00:28:13.049 08:24:46 -- common/autotest_common.sh@950 -- # wait 60907 00:28:13.618 08:24:46 -- target/filesystem.sh@102 -- # nvmfpid= 00:28:13.618 00:28:13.618 real 0m9.205s 00:28:13.618 user 0m35.233s 00:28:13.618 sys 0m1.145s 00:28:13.618 08:24:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:13.618 08:24:46 -- common/autotest_common.sh@10 -- # set +x 00:28:13.618 ************************************ 00:28:13.618 END TEST nvmf_filesystem_in_capsule 00:28:13.618 ************************************ 00:28:13.618 08:24:46 -- target/filesystem.sh@108 -- # nvmftestfini 00:28:13.618 08:24:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:13.618 08:24:46 -- nvmf/common.sh@116 -- # sync 00:28:13.618 08:24:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:13.618 08:24:46 -- nvmf/common.sh@119 -- # set +e 00:28:13.618 08:24:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:13.618 08:24:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:13.618 rmmod nvme_tcp 00:28:13.618 rmmod nvme_fabrics 00:28:13.877 rmmod nvme_keyring 00:28:13.877 08:24:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:13.877 08:24:46 -- nvmf/common.sh@123 -- # set -e 00:28:13.877 08:24:46 -- nvmf/common.sh@124 -- # return 0 00:28:13.877 08:24:46 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:28:13.877 08:24:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:13.877 08:24:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:13.877 08:24:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:13.877 08:24:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:13.877 08:24:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:13.877 08:24:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.877 08:24:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:13.877 08:24:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.877 08:24:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:28:13.877 00:28:13.877 real 0m19.868s 00:28:13.877 user 1m12.524s 00:28:13.877 sys 0m2.972s 00:28:13.877 08:24:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:13.877 08:24:47 -- common/autotest_common.sh@10 -- # set +x 00:28:13.877 ************************************ 00:28:13.877 END TEST nvmf_filesystem 00:28:13.877 ************************************ 00:28:13.877 08:24:47 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:28:13.877 08:24:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:13.877 08:24:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:13.877 08:24:47 -- common/autotest_common.sh@10 -- # set +x 00:28:13.877 ************************************ 00:28:13.877 START TEST nvmf_discovery 00:28:13.877 ************************************ 00:28:13.877 08:24:47 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:28:13.877 * Looking for test storage... 00:28:14.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:14.177 08:24:47 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:14.177 08:24:47 -- nvmf/common.sh@7 -- # uname -s 00:28:14.177 08:24:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.177 08:24:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.177 08:24:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.177 08:24:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.177 08:24:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.177 08:24:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.177 08:24:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.177 08:24:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.177 08:24:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.177 08:24:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.177 08:24:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:28:14.177 08:24:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:28:14.177 08:24:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.177 08:24:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.177 08:24:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:14.177 08:24:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:14.177 08:24:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.177 08:24:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.177 08:24:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.177 08:24:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.177 08:24:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.177 08:24:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.177 08:24:47 -- paths/export.sh@5 -- # export PATH 00:28:14.177 08:24:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.177 08:24:47 -- nvmf/common.sh@46 -- # : 0 00:28:14.177 08:24:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:14.177 08:24:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:14.177 08:24:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:14.177 08:24:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.177 08:24:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.177 08:24:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:14.177 08:24:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:14.177 08:24:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:14.177 08:24:47 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:28:14.177 08:24:47 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:28:14.177 08:24:47 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:28:14.177 08:24:47 -- target/discovery.sh@15 -- # hash nvme 00:28:14.177 08:24:47 -- target/discovery.sh@20 -- # nvmftestinit 00:28:14.177 08:24:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:14.177 08:24:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.177 08:24:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:14.177 08:24:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:14.177 08:24:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:14.177 08:24:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.177 08:24:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:14.177 08:24:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.177 08:24:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:28:14.177 08:24:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:28:14.177 08:24:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:28:14.177 08:24:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:28:14.177 08:24:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:28:14.177 08:24:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:28:14.177 08:24:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.177 08:24:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.177 08:24:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:14.177 08:24:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:28:14.177 08:24:47 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:14.177 08:24:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:14.177 08:24:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:14.177 08:24:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:14.178 08:24:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:14.178 08:24:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:14.178 08:24:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:14.178 08:24:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:14.178 08:24:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:28:14.178 08:24:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:28:14.178 Cannot find device "nvmf_tgt_br" 00:28:14.178 08:24:47 -- nvmf/common.sh@154 -- # true 00:28:14.178 08:24:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:28:14.178 Cannot find device "nvmf_tgt_br2" 00:28:14.178 08:24:47 -- nvmf/common.sh@155 -- # true 00:28:14.178 08:24:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:28:14.178 08:24:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:28:14.178 Cannot find device "nvmf_tgt_br" 00:28:14.178 08:24:47 -- nvmf/common.sh@157 -- # true 00:28:14.178 08:24:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:28:14.178 Cannot find device "nvmf_tgt_br2" 00:28:14.178 08:24:47 -- nvmf/common.sh@158 -- # true 00:28:14.178 08:24:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:28:14.178 08:24:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:28:14.178 08:24:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:14.178 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:14.178 08:24:47 -- nvmf/common.sh@161 -- # true 00:28:14.178 08:24:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:14.178 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:14.178 08:24:47 -- nvmf/common.sh@162 -- # true 00:28:14.178 08:24:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:28:14.178 08:24:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:14.178 08:24:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:14.178 08:24:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:14.178 08:24:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:14.178 08:24:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:14.459 08:24:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:14.459 08:24:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:14.459 08:24:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:14.459 08:24:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:28:14.459 08:24:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:28:14.459 08:24:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:28:14.459 08:24:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:28:14.459 08:24:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:14.459 08:24:47 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:14.459 08:24:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:14.459 08:24:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:28:14.459 08:24:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:28:14.459 08:24:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:28:14.459 08:24:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:14.459 08:24:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:14.459 08:24:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:14.459 08:24:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:14.459 08:24:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:28:14.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:14.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:28:14.459 00:28:14.459 --- 10.0.0.2 ping statistics --- 00:28:14.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.459 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:28:14.459 08:24:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:28:14.459 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:14.459 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:28:14.459 00:28:14.459 --- 10.0.0.3 ping statistics --- 00:28:14.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.459 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:28:14.459 08:24:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:14.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:14.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:28:14.459 00:28:14.459 --- 10.0.0.1 ping statistics --- 00:28:14.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.459 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:28:14.459 08:24:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.459 08:24:47 -- nvmf/common.sh@421 -- # return 0 00:28:14.459 08:24:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:14.459 08:24:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.459 08:24:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:14.459 08:24:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:14.459 08:24:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.459 08:24:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:14.459 08:24:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:14.459 08:24:47 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:28:14.459 08:24:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:14.459 08:24:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:14.459 08:24:47 -- common/autotest_common.sh@10 -- # set +x 00:28:14.459 08:24:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:14.459 08:24:47 -- nvmf/common.sh@469 -- # nvmfpid=61364 00:28:14.459 08:24:47 -- nvmf/common.sh@470 -- # waitforlisten 61364 00:28:14.459 08:24:47 -- common/autotest_common.sh@819 -- # '[' -z 61364 ']' 00:28:14.459 08:24:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.459 08:24:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:14.459 08:24:47 -- 
common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.459 08:24:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:14.459 08:24:47 -- common/autotest_common.sh@10 -- # set +x 00:28:14.459 [2024-04-17 08:24:47.645154] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:14.459 [2024-04-17 08:24:47.645211] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.459 [2024-04-17 08:24:47.786601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:14.718 [2024-04-17 08:24:47.939159] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:14.718 [2024-04-17 08:24:47.939331] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.718 [2024-04-17 08:24:47.939339] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.718 [2024-04-17 08:24:47.939346] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:14.718 [2024-04-17 08:24:47.939642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.718 [2024-04-17 08:24:47.939787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:14.718 [2024-04-17 08:24:47.939982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.718 [2024-04-17 08:24:47.939988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:15.286 08:24:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:15.286 08:24:48 -- common/autotest_common.sh@852 -- # return 0 00:28:15.286 08:24:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:15.286 08:24:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:15.286 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.286 08:24:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.286 08:24:48 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:15.286 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.286 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.286 [2024-04-17 08:24:48.610617] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.547 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.547 08:24:48 -- target/discovery.sh@26 -- # seq 1 4 00:28:15.547 08:24:48 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:28:15.547 08:24:48 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:28:15.547 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.547 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.547 Null1 00:28:15.547 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.547 08:24:48 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:15.547 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.547 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.547 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 
0 ]] 00:28:15.547 08:24:48 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:28:15.547 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.547 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.547 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.547 08:24:48 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:15.547 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.547 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.547 [2024-04-17 08:24:48.677601] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.547 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.547 08:24:48 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:28:15.547 08:24:48 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:28:15.547 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.547 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.547 Null2 00:28:15.547 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.547 08:24:48 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:15.547 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.547 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.547 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.547 08:24:48 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:28:15.547 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.547 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.547 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.547 08:24:48 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:15.547 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.547 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.547 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.547 08:24:48 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:28:15.547 08:24:48 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:28:15.547 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.547 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.547 Null3 00:28:15.547 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.547 08:24:48 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:28:15.547 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.547 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.547 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.547 08:24:48 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:28:15.547 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.547 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.547 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.547 08:24:48 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:28:15.547 08:24:48 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:28:15.547 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.547 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.547 08:24:48 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:28:15.547 08:24:48 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:28:15.547 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.547 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.547 Null4 00:28:15.547 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.547 08:24:48 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:28:15.547 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.547 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.547 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.547 08:24:48 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:28:15.547 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.547 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.547 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.547 08:24:48 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:28:15.547 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.547 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.547 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.547 08:24:48 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:15.547 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.547 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.547 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.547 08:24:48 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:28:15.547 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.547 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.547 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.547 08:24:48 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -a 10.0.0.2 -s 4420 00:28:15.807 00:28:15.807 Discovery Log Number of Records 6, Generation counter 6 00:28:15.807 =====Discovery Log Entry 0====== 00:28:15.807 trtype: tcp 00:28:15.807 adrfam: ipv4 00:28:15.807 subtype: current discovery subsystem 00:28:15.807 treq: not required 00:28:15.807 portid: 0 00:28:15.807 trsvcid: 4420 00:28:15.807 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:15.807 traddr: 10.0.0.2 00:28:15.807 eflags: explicit discovery connections, duplicate discovery information 00:28:15.807 sectype: none 00:28:15.807 =====Discovery Log Entry 1====== 00:28:15.807 trtype: tcp 00:28:15.807 adrfam: ipv4 00:28:15.807 subtype: nvme subsystem 00:28:15.807 treq: not required 00:28:15.807 portid: 0 00:28:15.807 trsvcid: 4420 00:28:15.807 subnqn: nqn.2016-06.io.spdk:cnode1 00:28:15.807 traddr: 10.0.0.2 00:28:15.807 eflags: none 00:28:15.807 sectype: none 00:28:15.807 =====Discovery Log Entry 2====== 00:28:15.807 trtype: tcp 00:28:15.807 adrfam: ipv4 00:28:15.807 subtype: nvme subsystem 00:28:15.807 treq: not required 00:28:15.807 portid: 0 00:28:15.807 trsvcid: 4420 
00:28:15.807 subnqn: nqn.2016-06.io.spdk:cnode2 00:28:15.807 traddr: 10.0.0.2 00:28:15.807 eflags: none 00:28:15.807 sectype: none 00:28:15.807 =====Discovery Log Entry 3====== 00:28:15.807 trtype: tcp 00:28:15.807 adrfam: ipv4 00:28:15.807 subtype: nvme subsystem 00:28:15.807 treq: not required 00:28:15.807 portid: 0 00:28:15.807 trsvcid: 4420 00:28:15.807 subnqn: nqn.2016-06.io.spdk:cnode3 00:28:15.807 traddr: 10.0.0.2 00:28:15.807 eflags: none 00:28:15.807 sectype: none 00:28:15.807 =====Discovery Log Entry 4====== 00:28:15.807 trtype: tcp 00:28:15.807 adrfam: ipv4 00:28:15.807 subtype: nvme subsystem 00:28:15.807 treq: not required 00:28:15.807 portid: 0 00:28:15.807 trsvcid: 4420 00:28:15.807 subnqn: nqn.2016-06.io.spdk:cnode4 00:28:15.807 traddr: 10.0.0.2 00:28:15.807 eflags: none 00:28:15.807 sectype: none 00:28:15.808 =====Discovery Log Entry 5====== 00:28:15.808 trtype: tcp 00:28:15.808 adrfam: ipv4 00:28:15.808 subtype: discovery subsystem referral 00:28:15.808 treq: not required 00:28:15.808 portid: 0 00:28:15.808 trsvcid: 4430 00:28:15.808 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:15.808 traddr: 10.0.0.2 00:28:15.808 eflags: none 00:28:15.808 sectype: none 00:28:15.808 Perform nvmf subsystem discovery via RPC 00:28:15.808 08:24:48 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:28:15.808 08:24:48 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:28:15.808 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.808 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.808 [2024-04-17 08:24:48.933224] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:15.808 [ 00:28:15.808 { 00:28:15.808 "allow_any_host": true, 00:28:15.808 "hosts": [], 00:28:15.808 "listen_addresses": [ 00:28:15.808 { 00:28:15.808 "adrfam": "IPv4", 00:28:15.808 "traddr": "10.0.0.2", 00:28:15.808 "transport": "TCP", 00:28:15.808 "trsvcid": "4420", 00:28:15.808 "trtype": "TCP" 00:28:15.808 } 00:28:15.808 ], 00:28:15.808 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:15.808 "subtype": "Discovery" 00:28:15.808 }, 00:28:15.808 { 00:28:15.808 "allow_any_host": true, 00:28:15.808 "hosts": [], 00:28:15.808 "listen_addresses": [ 00:28:15.808 { 00:28:15.808 "adrfam": "IPv4", 00:28:15.808 "traddr": "10.0.0.2", 00:28:15.808 "transport": "TCP", 00:28:15.808 "trsvcid": "4420", 00:28:15.808 "trtype": "TCP" 00:28:15.808 } 00:28:15.808 ], 00:28:15.808 "max_cntlid": 65519, 00:28:15.808 "max_namespaces": 32, 00:28:15.808 "min_cntlid": 1, 00:28:15.808 "model_number": "SPDK bdev Controller", 00:28:15.808 "namespaces": [ 00:28:15.808 { 00:28:15.808 "bdev_name": "Null1", 00:28:15.808 "name": "Null1", 00:28:15.808 "nguid": "4D840F3C51FD4AEA85E8D3760E4D9266", 00:28:15.808 "nsid": 1, 00:28:15.808 "uuid": "4d840f3c-51fd-4aea-85e8-d3760e4d9266" 00:28:15.808 } 00:28:15.808 ], 00:28:15.808 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:15.808 "serial_number": "SPDK00000000000001", 00:28:15.808 "subtype": "NVMe" 00:28:15.808 }, 00:28:15.808 { 00:28:15.808 "allow_any_host": true, 00:28:15.808 "hosts": [], 00:28:15.808 "listen_addresses": [ 00:28:15.808 { 00:28:15.808 "adrfam": "IPv4", 00:28:15.808 "traddr": "10.0.0.2", 00:28:15.808 "transport": "TCP", 00:28:15.808 "trsvcid": "4420", 00:28:15.808 "trtype": "TCP" 00:28:15.808 } 00:28:15.808 ], 00:28:15.808 "max_cntlid": 65519, 00:28:15.808 "max_namespaces": 32, 00:28:15.808 "min_cntlid": 1, 
00:28:15.808 "model_number": "SPDK bdev Controller", 00:28:15.808 "namespaces": [ 00:28:15.808 { 00:28:15.808 "bdev_name": "Null2", 00:28:15.808 "name": "Null2", 00:28:15.808 "nguid": "E21C0BD239BE4394852519F91D4D9175", 00:28:15.808 "nsid": 1, 00:28:15.808 "uuid": "e21c0bd2-39be-4394-8525-19f91d4d9175" 00:28:15.808 } 00:28:15.808 ], 00:28:15.808 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:28:15.808 "serial_number": "SPDK00000000000002", 00:28:15.808 "subtype": "NVMe" 00:28:15.808 }, 00:28:15.808 { 00:28:15.808 "allow_any_host": true, 00:28:15.808 "hosts": [], 00:28:15.808 "listen_addresses": [ 00:28:15.808 { 00:28:15.808 "adrfam": "IPv4", 00:28:15.808 "traddr": "10.0.0.2", 00:28:15.808 "transport": "TCP", 00:28:15.808 "trsvcid": "4420", 00:28:15.808 "trtype": "TCP" 00:28:15.808 } 00:28:15.808 ], 00:28:15.808 "max_cntlid": 65519, 00:28:15.808 "max_namespaces": 32, 00:28:15.808 "min_cntlid": 1, 00:28:15.808 "model_number": "SPDK bdev Controller", 00:28:15.808 "namespaces": [ 00:28:15.808 { 00:28:15.808 "bdev_name": "Null3", 00:28:15.808 "name": "Null3", 00:28:15.808 "nguid": "1F2281A2C5914B8284411EE8FE07CFDC", 00:28:15.808 "nsid": 1, 00:28:15.808 "uuid": "1f2281a2-c591-4b82-8441-1ee8fe07cfdc" 00:28:15.808 } 00:28:15.808 ], 00:28:15.808 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:28:15.808 "serial_number": "SPDK00000000000003", 00:28:15.808 "subtype": "NVMe" 00:28:15.808 }, 00:28:15.808 { 00:28:15.808 "allow_any_host": true, 00:28:15.808 "hosts": [], 00:28:15.808 "listen_addresses": [ 00:28:15.808 { 00:28:15.808 "adrfam": "IPv4", 00:28:15.808 "traddr": "10.0.0.2", 00:28:15.808 "transport": "TCP", 00:28:15.808 "trsvcid": "4420", 00:28:15.808 "trtype": "TCP" 00:28:15.808 } 00:28:15.808 ], 00:28:15.808 "max_cntlid": 65519, 00:28:15.808 "max_namespaces": 32, 00:28:15.808 "min_cntlid": 1, 00:28:15.808 "model_number": "SPDK bdev Controller", 00:28:15.808 "namespaces": [ 00:28:15.808 { 00:28:15.808 "bdev_name": "Null4", 00:28:15.808 "name": "Null4", 00:28:15.808 "nguid": "E3D6D7C4C4BD4BE1AF8195511DA01D85", 00:28:15.808 "nsid": 1, 00:28:15.808 "uuid": "e3d6d7c4-c4bd-4be1-af81-95511da01d85" 00:28:15.808 } 00:28:15.808 ], 00:28:15.808 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:28:15.808 "serial_number": "SPDK00000000000004", 00:28:15.808 "subtype": "NVMe" 00:28:15.808 } 00:28:15.808 ] 00:28:15.808 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.808 08:24:48 -- target/discovery.sh@42 -- # seq 1 4 00:28:15.808 08:24:48 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:28:15.808 08:24:48 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:15.808 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.808 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.808 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.808 08:24:48 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:28:15.808 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.808 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.808 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.808 08:24:48 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:28:15.808 08:24:48 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:15.808 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.808 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.808 08:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.808 08:24:48 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:28:15.808 08:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.808 08:24:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.808 08:24:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.808 08:24:49 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:28:15.808 08:24:49 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:15.808 08:24:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.808 08:24:49 -- common/autotest_common.sh@10 -- # set +x 00:28:15.808 08:24:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.808 08:24:49 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:28:15.808 08:24:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.808 08:24:49 -- common/autotest_common.sh@10 -- # set +x 00:28:15.808 08:24:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.808 08:24:49 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:28:15.808 08:24:49 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:28:15.808 08:24:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.808 08:24:49 -- common/autotest_common.sh@10 -- # set +x 00:28:15.808 08:24:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.808 08:24:49 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:28:15.808 08:24:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.808 08:24:49 -- common/autotest_common.sh@10 -- # set +x 00:28:15.808 08:24:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.808 08:24:49 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:28:15.808 08:24:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.808 08:24:49 -- common/autotest_common.sh@10 -- # set +x 00:28:15.808 08:24:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.808 08:24:49 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:28:15.808 08:24:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.808 08:24:49 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:28:15.808 08:24:49 -- common/autotest_common.sh@10 -- # set +x 00:28:15.808 08:24:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.808 08:24:49 -- target/discovery.sh@49 -- # check_bdevs= 00:28:15.808 08:24:49 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:28:15.808 08:24:49 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:28:15.808 08:24:49 -- target/discovery.sh@57 -- # nvmftestfini 00:28:15.808 08:24:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:15.808 08:24:49 -- nvmf/common.sh@116 -- # sync 00:28:16.067 08:24:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:16.067 08:24:49 -- nvmf/common.sh@119 -- # set +e 00:28:16.067 08:24:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:16.067 08:24:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:16.067 rmmod nvme_tcp 00:28:16.067 rmmod nvme_fabrics 00:28:16.067 rmmod nvme_keyring 00:28:16.067 08:24:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:16.067 08:24:49 -- nvmf/common.sh@123 -- # set -e 00:28:16.067 08:24:49 -- nvmf/common.sh@124 -- # return 0 00:28:16.067 08:24:49 -- nvmf/common.sh@477 -- # '[' -n 61364 ']' 00:28:16.067 08:24:49 -- nvmf/common.sh@478 -- # killprocess 61364 00:28:16.067 08:24:49 -- common/autotest_common.sh@926 -- # '[' -z 61364 ']' 00:28:16.067 08:24:49 -- 
common/autotest_common.sh@930 -- # kill -0 61364 00:28:16.067 08:24:49 -- common/autotest_common.sh@931 -- # uname 00:28:16.067 08:24:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:16.067 08:24:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61364 00:28:16.067 08:24:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:16.067 08:24:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:16.067 killing process with pid 61364 00:28:16.067 08:24:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61364' 00:28:16.067 08:24:49 -- common/autotest_common.sh@945 -- # kill 61364 00:28:16.067 [2024-04-17 08:24:49.232397] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:28:16.067 08:24:49 -- common/autotest_common.sh@950 -- # wait 61364 00:28:16.325 08:24:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:16.325 08:24:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:16.325 08:24:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:16.325 08:24:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:16.325 08:24:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:16.325 08:24:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.325 08:24:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:16.325 08:24:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.325 08:24:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:28:16.585 00:28:16.586 real 0m2.559s 00:28:16.586 user 0m6.658s 00:28:16.586 sys 0m0.702s 00:28:16.586 08:24:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:16.586 08:24:49 -- common/autotest_common.sh@10 -- # set +x 00:28:16.586 ************************************ 00:28:16.586 END TEST nvmf_discovery 00:28:16.586 ************************************ 00:28:16.586 08:24:49 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:28:16.586 08:24:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:16.586 08:24:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:16.586 08:24:49 -- common/autotest_common.sh@10 -- # set +x 00:28:16.586 ************************************ 00:28:16.586 START TEST nvmf_referrals 00:28:16.586 ************************************ 00:28:16.586 08:24:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:28:16.586 * Looking for test storage... 
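The nvmf_discovery run above ends by walking the subsystems back down over RPC. Outside the rpc_cmd wrapper, the same sequence is a handful of calls against the standard SPDK RPC client; a minimal sketch, assuming the default socket at /var/tmp/spdk.sock, with the jq path taken from the JSON dump shown earlier:

  # List every configured NQN; here that is the discovery subsystem plus cnode1..cnode4
  scripts/rpc.py nvmf_get_subsystems | jq -r '.[].nqn'
  # Tear one pair down the way the test loop does: subsystem first, then its null bdev
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_null_delete Null1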
00:28:16.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:16.586 08:24:49 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:16.586 08:24:49 -- nvmf/common.sh@7 -- # uname -s 00:28:16.586 08:24:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.586 08:24:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.586 08:24:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.586 08:24:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.586 08:24:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.586 08:24:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.586 08:24:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.586 08:24:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.586 08:24:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.586 08:24:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.586 08:24:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:28:16.586 08:24:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:28:16.586 08:24:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.586 08:24:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.586 08:24:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:16.586 08:24:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:16.586 08:24:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.586 08:24:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.586 08:24:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.586 08:24:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.586 08:24:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.586 08:24:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.586 08:24:49 -- 
paths/export.sh@5 -- # export PATH 00:28:16.586 08:24:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.586 08:24:49 -- nvmf/common.sh@46 -- # : 0 00:28:16.586 08:24:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:16.586 08:24:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:16.586 08:24:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:16.586 08:24:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.586 08:24:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:16.586 08:24:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:16.586 08:24:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:16.586 08:24:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:16.586 08:24:49 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:28:16.586 08:24:49 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:28:16.586 08:24:49 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:28:16.586 08:24:49 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:28:16.586 08:24:49 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:16.586 08:24:49 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:28:16.586 08:24:49 -- target/referrals.sh@37 -- # nvmftestinit 00:28:16.586 08:24:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:16.586 08:24:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:16.586 08:24:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:16.586 08:24:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:16.586 08:24:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:16.586 08:24:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.586 08:24:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:16.586 08:24:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.586 08:24:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:28:16.586 08:24:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:28:16.586 08:24:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:28:16.586 08:24:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:28:16.586 08:24:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:28:16.586 08:24:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:28:16.586 08:24:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:16.586 08:24:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:16.586 08:24:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:16.586 08:24:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:28:16.586 08:24:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:16.586 08:24:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:16.586 08:24:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:16.586 08:24:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:16.586 08:24:49 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:16.586 08:24:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:16.586 08:24:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:16.586 08:24:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:16.586 08:24:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:28:16.845 08:24:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:28:16.845 Cannot find device "nvmf_tgt_br" 00:28:16.845 08:24:49 -- nvmf/common.sh@154 -- # true 00:28:16.845 08:24:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:28:16.845 Cannot find device "nvmf_tgt_br2" 00:28:16.845 08:24:49 -- nvmf/common.sh@155 -- # true 00:28:16.845 08:24:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:28:16.845 08:24:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:28:16.845 Cannot find device "nvmf_tgt_br" 00:28:16.845 08:24:49 -- nvmf/common.sh@157 -- # true 00:28:16.845 08:24:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:28:16.845 Cannot find device "nvmf_tgt_br2" 00:28:16.845 08:24:49 -- nvmf/common.sh@158 -- # true 00:28:16.845 08:24:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:28:16.845 08:24:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:28:16.845 08:24:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:16.845 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:16.845 08:24:50 -- nvmf/common.sh@161 -- # true 00:28:16.845 08:24:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:16.845 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:16.845 08:24:50 -- nvmf/common.sh@162 -- # true 00:28:16.845 08:24:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:28:16.845 08:24:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:16.845 08:24:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:16.845 08:24:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:16.845 08:24:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:16.845 08:24:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:16.845 08:24:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:17.105 08:24:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:17.105 08:24:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:17.105 08:24:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:28:17.105 08:24:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:28:17.105 08:24:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:28:17.105 08:24:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:28:17.105 08:24:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:17.105 08:24:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:17.105 08:24:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:17.105 08:24:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:28:17.105 08:24:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:28:17.105 08:24:50 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:28:17.105 08:24:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:17.105 08:24:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:17.105 08:24:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:17.105 08:24:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:17.105 08:24:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:28:17.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:28:17.105 00:28:17.105 --- 10.0.0.2 ping statistics --- 00:28:17.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.105 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:28:17.105 08:24:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:28:17.105 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:17.105 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:28:17.105 00:28:17.105 --- 10.0.0.3 ping statistics --- 00:28:17.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.105 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:28:17.105 08:24:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:17.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:17.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:28:17.105 00:28:17.105 --- 10.0.0.1 ping statistics --- 00:28:17.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.105 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:28:17.105 08:24:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.105 08:24:50 -- nvmf/common.sh@421 -- # return 0 00:28:17.105 08:24:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:17.105 08:24:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:17.105 08:24:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:17.105 08:24:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:17.105 08:24:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:17.105 08:24:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:17.105 08:24:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:17.105 08:24:50 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:28:17.105 08:24:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:17.105 08:24:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:17.105 08:24:50 -- common/autotest_common.sh@10 -- # set +x 00:28:17.105 08:24:50 -- nvmf/common.sh@469 -- # nvmfpid=61602 00:28:17.105 08:24:50 -- nvmf/common.sh@470 -- # waitforlisten 61602 00:28:17.105 08:24:50 -- common/autotest_common.sh@819 -- # '[' -z 61602 ']' 00:28:17.105 08:24:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.105 08:24:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:17.105 08:24:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:17.105 08:24:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
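nvmf_veth_init above rebuilds the same two-namespace topology on every run; condensed and stripped of the xtrace noise, the wiring amounts to the following (interface and namespace names exactly as in the log; a sketch of the harness steps, not a replacement for common.sh):

  # Target traffic lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # Bridge the root-namespace peer ends together and open the NVMe/TCP port
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target sanity check, as in the pings above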
00:28:17.105 08:24:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:17.105 08:24:50 -- common/autotest_common.sh@10 -- # set +x 00:28:17.364 [2024-04-17 08:24:50.447826] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:17.364 [2024-04-17 08:24:50.447906] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.364 [2024-04-17 08:24:50.594757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:17.624 [2024-04-17 08:24:50.716073] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:17.624 [2024-04-17 08:24:50.716248] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:17.624 [2024-04-17 08:24:50.716271] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.624 [2024-04-17 08:24:50.716282] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:17.624 [2024-04-17 08:24:50.716631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.624 [2024-04-17 08:24:50.716679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:17.624 [2024-04-17 08:24:50.716724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:17.624 [2024-04-17 08:24:50.716728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.194 08:24:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:18.194 08:24:51 -- common/autotest_common.sh@852 -- # return 0 00:28:18.194 08:24:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:18.194 08:24:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:18.194 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:18.194 08:24:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:18.194 08:24:51 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:18.194 08:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.194 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:18.194 [2024-04-17 08:24:51.405175] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:18.194 08:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.194 08:24:51 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:28:18.194 08:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.194 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:18.194 [2024-04-17 08:24:51.433848] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:18.194 08:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.194 08:24:51 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:28:18.194 08:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.194 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:18.194 08:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.194 08:24:51 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:28:18.194 08:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.194 08:24:51 -- 
common/autotest_common.sh@10 -- # set +x 00:28:18.194 08:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.194 08:24:51 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:28:18.194 08:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.194 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:18.194 08:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.194 08:24:51 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:28:18.194 08:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.194 08:24:51 -- target/referrals.sh@48 -- # jq length 00:28:18.194 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:18.194 08:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.453 08:24:51 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:28:18.453 08:24:51 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:28:18.453 08:24:51 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:28:18.453 08:24:51 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:28:18.453 08:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.453 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:18.453 08:24:51 -- target/referrals.sh@21 -- # sort 00:28:18.453 08:24:51 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:28:18.453 08:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.453 08:24:51 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:28:18.453 08:24:51 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:28:18.454 08:24:51 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:28:18.454 08:24:51 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:28:18.454 08:24:51 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:28:18.454 08:24:51 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:28:18.454 08:24:51 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:28:18.454 08:24:51 -- target/referrals.sh@26 -- # sort 00:28:18.454 08:24:51 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:28:18.454 08:24:51 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:28:18.454 08:24:51 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:28:18.454 08:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.454 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:18.454 08:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.454 08:24:51 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:28:18.454 08:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.454 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:18.454 08:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.454 08:24:51 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:28:18.454 08:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.454 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:18.454 08:24:51 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.454 08:24:51 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:28:18.454 08:24:51 -- target/referrals.sh@56 -- # jq length 00:28:18.454 08:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.454 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:18.454 08:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.713 08:24:51 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:28:18.713 08:24:51 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:28:18.713 08:24:51 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:28:18.713 08:24:51 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:28:18.713 08:24:51 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:28:18.713 08:24:51 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:28:18.713 08:24:51 -- target/referrals.sh@26 -- # sort 00:28:18.713 08:24:51 -- target/referrals.sh@26 -- # echo 00:28:18.713 08:24:51 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:28:18.713 08:24:51 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:28:18.713 08:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.713 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:18.713 08:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.713 08:24:51 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:28:18.713 08:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.713 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:18.713 08:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.713 08:24:51 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:28:18.713 08:24:51 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:28:18.713 08:24:51 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:28:18.713 08:24:51 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:28:18.713 08:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.713 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:18.713 08:24:51 -- target/referrals.sh@21 -- # sort 00:28:18.713 08:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.713 08:24:51 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:28:18.713 08:24:51 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:28:18.713 08:24:51 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:28:18.713 08:24:51 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:28:18.713 08:24:51 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:28:18.713 08:24:51 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:28:18.713 08:24:51 -- target/referrals.sh@26 -- # sort 00:28:18.713 08:24:51 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:28:18.713 08:24:52 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:28:18.713 08:24:52 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == 
\1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:28:18.713 08:24:52 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:28:18.713 08:24:52 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:28:18.713 08:24:52 -- target/referrals.sh@67 -- # jq -r .subnqn 00:28:18.713 08:24:52 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:28:18.713 08:24:52 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:28:18.971 08:24:52 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:28:18.971 08:24:52 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:28:18.971 08:24:52 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:28:18.971 08:24:52 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:28:18.971 08:24:52 -- target/referrals.sh@68 -- # jq -r .subnqn 00:28:18.971 08:24:52 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:28:18.971 08:24:52 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:28:18.971 08:24:52 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:28:18.971 08:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.971 08:24:52 -- common/autotest_common.sh@10 -- # set +x 00:28:18.971 08:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.971 08:24:52 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:28:18.971 08:24:52 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:28:18.971 08:24:52 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:28:18.971 08:24:52 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:28:18.971 08:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.971 08:24:52 -- common/autotest_common.sh@10 -- # set +x 00:28:18.971 08:24:52 -- target/referrals.sh@21 -- # sort 00:28:18.971 08:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.971 08:24:52 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:28:18.971 08:24:52 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:28:18.971 08:24:52 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:28:18.971 08:24:52 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:28:18.971 08:24:52 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:28:18.971 08:24:52 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:28:18.971 08:24:52 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:28:18.971 08:24:52 -- target/referrals.sh@26 -- # sort 00:28:18.971 08:24:52 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:28:18.971 08:24:52 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:28:19.230 08:24:52 -- target/referrals.sh@75 -- # jq -r .subnqn 00:28:19.230 08:24:52 
-- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:28:19.230 08:24:52 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:28:19.230 08:24:52 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:28:19.230 08:24:52 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:28:19.230 08:24:52 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:28:19.230 08:24:52 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:28:19.230 08:24:52 -- target/referrals.sh@76 -- # jq -r .subnqn 00:28:19.230 08:24:52 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:28:19.230 08:24:52 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:28:19.230 08:24:52 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:28:19.230 08:24:52 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:28:19.230 08:24:52 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:28:19.230 08:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:19.230 08:24:52 -- common/autotest_common.sh@10 -- # set +x 00:28:19.230 08:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:19.230 08:24:52 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:28:19.230 08:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:19.230 08:24:52 -- common/autotest_common.sh@10 -- # set +x 00:28:19.230 08:24:52 -- target/referrals.sh@82 -- # jq length 00:28:19.230 08:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:19.230 08:24:52 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:28:19.230 08:24:52 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:28:19.230 08:24:52 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:28:19.230 08:24:52 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:28:19.230 08:24:52 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:28:19.230 08:24:52 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:28:19.230 08:24:52 -- target/referrals.sh@26 -- # sort 00:28:19.489 08:24:52 -- target/referrals.sh@26 -- # echo 00:28:19.489 08:24:52 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:28:19.489 08:24:52 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:28:19.489 08:24:52 -- target/referrals.sh@86 -- # nvmftestfini 00:28:19.489 08:24:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:19.489 08:24:52 -- nvmf/common.sh@116 -- # sync 00:28:19.489 08:24:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:19.489 08:24:52 -- nvmf/common.sh@119 -- # set +e 00:28:19.489 08:24:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:19.489 08:24:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:19.489 rmmod nvme_tcp 00:28:19.489 rmmod nvme_fabrics 00:28:19.489 rmmod nvme_keyring 00:28:19.489 
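Each referral assertion in the run above pairs an RPC-side mutation with an nvme discover against the 8009 discovery listener, then compares the two views. Stripped to one round trip (hostnqn/hostid flags omitted since they are generated per run; the jq filter is copied from the log):

  # Add a referral pointing at another discovery service, then confirm it on the wire
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  scripts/rpc.py nvmf_discovery_get_referrals | jq length        # expect 1
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  # Remove it and the discovery log page should go back to empty
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430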
08:24:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:19.489 08:24:52 -- nvmf/common.sh@123 -- # set -e 00:28:19.489 08:24:52 -- nvmf/common.sh@124 -- # return 0 00:28:19.489 08:24:52 -- nvmf/common.sh@477 -- # '[' -n 61602 ']' 00:28:19.489 08:24:52 -- nvmf/common.sh@478 -- # killprocess 61602 00:28:19.489 08:24:52 -- common/autotest_common.sh@926 -- # '[' -z 61602 ']' 00:28:19.489 08:24:52 -- common/autotest_common.sh@930 -- # kill -0 61602 00:28:19.489 08:24:52 -- common/autotest_common.sh@931 -- # uname 00:28:19.489 08:24:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:19.489 08:24:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61602 00:28:19.489 killing process with pid 61602 00:28:19.489 08:24:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:19.489 08:24:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:19.489 08:24:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61602' 00:28:19.489 08:24:52 -- common/autotest_common.sh@945 -- # kill 61602 00:28:19.489 08:24:52 -- common/autotest_common.sh@950 -- # wait 61602 00:28:19.748 08:24:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:19.748 08:24:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:19.748 08:24:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:19.748 08:24:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:19.748 08:24:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:19.748 08:24:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.748 08:24:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:19.748 08:24:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.748 08:24:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:28:19.748 00:28:19.748 real 0m3.320s 00:28:19.748 user 0m10.074s 00:28:19.748 sys 0m1.032s 00:28:19.748 08:24:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:19.748 08:24:53 -- common/autotest_common.sh@10 -- # set +x 00:28:19.748 ************************************ 00:28:19.748 END TEST nvmf_referrals 00:28:19.748 ************************************ 00:28:20.006 08:24:53 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:28:20.006 08:24:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:20.006 08:24:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:20.006 08:24:53 -- common/autotest_common.sh@10 -- # set +x 00:28:20.006 ************************************ 00:28:20.006 START TEST nvmf_connect_disconnect 00:28:20.006 ************************************ 00:28:20.006 08:24:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:28:20.006 * Looking for test storage... 
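Both teardowns in this section go through the same killprocess guard from autotest_common.sh: the pid is signalled only after its command name is confirmed, so a recycled pid cannot take out an unrelated process. Roughly, reconstructed from the xtrace visible above:

  kill -0 "$nvmfpid"                                           # still alive?
  [[ "$(ps --no-headers -o comm= "$nvmfpid")" == reactor_0 ]]  # really an SPDK reactor?
  echo "killing process with pid $nvmfpid"
  kill "$nvmfpid" && wait "$nvmfpid"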
00:28:20.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:20.006 08:24:53 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:20.006 08:24:53 -- nvmf/common.sh@7 -- # uname -s 00:28:20.006 08:24:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.006 08:24:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.006 08:24:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.006 08:24:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.006 08:24:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.006 08:24:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.006 08:24:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.006 08:24:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.006 08:24:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.006 08:24:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.006 08:24:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:28:20.006 08:24:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:28:20.006 08:24:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.006 08:24:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.006 08:24:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:20.006 08:24:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:20.006 08:24:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.006 08:24:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.006 08:24:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.006 08:24:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.006 08:24:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.006 08:24:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.006 08:24:53 -- 
paths/export.sh@5 -- # export PATH 00:28:20.006 08:24:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.006 08:24:53 -- nvmf/common.sh@46 -- # : 0 00:28:20.006 08:24:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:20.006 08:24:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:20.006 08:24:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:20.006 08:24:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.006 08:24:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.006 08:24:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:20.006 08:24:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:20.006 08:24:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:20.006 08:24:53 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:20.006 08:24:53 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:20.006 08:24:53 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:28:20.006 08:24:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:20.006 08:24:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:20.006 08:24:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:20.006 08:24:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:20.006 08:24:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:20.006 08:24:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.006 08:24:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:20.006 08:24:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.006 08:24:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:28:20.006 08:24:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:28:20.006 08:24:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:28:20.006 08:24:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:28:20.006 08:24:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:28:20.006 08:24:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:28:20.006 08:24:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.006 08:24:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.006 08:24:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:20.006 08:24:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:28:20.006 08:24:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:20.006 08:24:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:20.006 08:24:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:20.006 08:24:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.006 08:24:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:20.006 08:24:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:20.006 08:24:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:20.006 08:24:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:20.006 08:24:53 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:28:20.006 08:24:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:28:20.263 Cannot find device "nvmf_tgt_br" 00:28:20.263 08:24:53 -- nvmf/common.sh@154 -- # true 00:28:20.263 08:24:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:28:20.263 Cannot find device "nvmf_tgt_br2" 00:28:20.263 08:24:53 -- nvmf/common.sh@155 -- # true 00:28:20.263 08:24:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:28:20.263 08:24:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:28:20.263 Cannot find device "nvmf_tgt_br" 00:28:20.263 08:24:53 -- nvmf/common.sh@157 -- # true 00:28:20.263 08:24:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:28:20.263 Cannot find device "nvmf_tgt_br2" 00:28:20.263 08:24:53 -- nvmf/common.sh@158 -- # true 00:28:20.263 08:24:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:28:20.264 08:24:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:28:20.264 08:24:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:20.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:20.264 08:24:53 -- nvmf/common.sh@161 -- # true 00:28:20.264 08:24:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:20.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:20.264 08:24:53 -- nvmf/common.sh@162 -- # true 00:28:20.264 08:24:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:28:20.264 08:24:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:20.264 08:24:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:20.264 08:24:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:20.264 08:24:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:20.264 08:24:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:20.264 08:24:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:20.264 08:24:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:20.264 08:24:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:20.264 08:24:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:28:20.264 08:24:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:28:20.264 08:24:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:28:20.264 08:24:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:28:20.264 08:24:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:20.522 08:24:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:20.522 08:24:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:20.522 08:24:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:28:20.522 08:24:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:28:20.522 08:24:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:28:20.522 08:24:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:20.522 08:24:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:20.522 08:24:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 
-j ACCEPT 00:28:20.522 08:24:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:20.522 08:24:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:28:20.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:20.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:28:20.522 00:28:20.522 --- 10.0.0.2 ping statistics --- 00:28:20.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.522 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:28:20.522 08:24:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:28:20.522 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:20.522 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:28:20.522 00:28:20.522 --- 10.0.0.3 ping statistics --- 00:28:20.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.522 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:28:20.522 08:24:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:20.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:20.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:28:20.522 00:28:20.522 --- 10.0.0.1 ping statistics --- 00:28:20.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.522 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:28:20.522 08:24:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.522 08:24:53 -- nvmf/common.sh@421 -- # return 0 00:28:20.522 08:24:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:20.522 08:24:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:20.522 08:24:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:20.522 08:24:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:20.522 08:24:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:20.522 08:24:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:20.522 08:24:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:20.522 08:24:53 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:28:20.522 08:24:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:20.522 08:24:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:20.522 08:24:53 -- common/autotest_common.sh@10 -- # set +x 00:28:20.522 08:24:53 -- nvmf/common.sh@469 -- # nvmfpid=61903 00:28:20.522 08:24:53 -- nvmf/common.sh@470 -- # waitforlisten 61903 00:28:20.522 08:24:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:20.522 08:24:53 -- common/autotest_common.sh@819 -- # '[' -z 61903 ']' 00:28:20.522 08:24:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.522 08:24:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:20.522 08:24:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.522 08:24:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:20.522 08:24:53 -- common/autotest_common.sh@10 -- # set +x 00:28:20.522 [2024-04-17 08:24:53.759673] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
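nvmfappstart above launches the target inside the namespace and parks on waitforlisten until the RPC socket answers. The moving parts, with paths as in the log (the polling loop here is an approximation of the harness helper, not its actual code):

  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll /var/tmp/spdk.sock until the app is ready to serve RPCs
  until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done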
00:28:20.522 [2024-04-17 08:24:53.759852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.781 [2024-04-17 08:24:53.904262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:20.781 [2024-04-17 08:24:54.011494] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:20.781 [2024-04-17 08:24:54.011737] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.781 [2024-04-17 08:24:54.011772] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.781 [2024-04-17 08:24:54.011810] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:20.781 [2024-04-17 08:24:54.012066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.781 [2024-04-17 08:24:54.012747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:20.781 [2024-04-17 08:24:54.012858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.781 [2024-04-17 08:24:54.012859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:21.349 08:24:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:21.607 08:24:54 -- common/autotest_common.sh@852 -- # return 0 00:28:21.608 08:24:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:21.608 08:24:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:21.608 08:24:54 -- common/autotest_common.sh@10 -- # set +x 00:28:21.608 08:24:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.608 08:24:54 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:28:21.608 08:24:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.608 08:24:54 -- common/autotest_common.sh@10 -- # set +x 00:28:21.608 [2024-04-17 08:24:54.743147] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.608 08:24:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.608 08:24:54 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:28:21.608 08:24:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.608 08:24:54 -- common/autotest_common.sh@10 -- # set +x 00:28:21.608 08:24:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.608 08:24:54 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:28:21.608 08:24:54 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:21.608 08:24:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.608 08:24:54 -- common/autotest_common.sh@10 -- # set +x 00:28:21.608 08:24:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.608 08:24:54 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:21.608 08:24:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.608 08:24:54 -- common/autotest_common.sh@10 -- # set +x 00:28:21.608 08:24:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.608 08:24:54 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.608 08:24:54 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.608 08:24:54 -- common/autotest_common.sh@10 -- # set +x 00:28:21.608 [2024-04-17 08:24:54.827432] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.608 08:24:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.608 08:24:54 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:28:21.608 08:24:54 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:28:21.608 08:24:54 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:28:21.608 08:24:54 -- target/connect_disconnect.sh@34 -- # set +x 00:28:24.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:26.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:28.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:30.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:33.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:35.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:37.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:39.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:42.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:44.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:46.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:48.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:51.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:53.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:55.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:57.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:00.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:01.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:04.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:06.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:09.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:10.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:13.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:15.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:17.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:19.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:22.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:24.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:26.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:28.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:30.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:33.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:35.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:37.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:39.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:42.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:44.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:46.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:48.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:51.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:29:52.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:55.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:57.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:59.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:02.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:04.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:06.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:08.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:11.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:13.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:15.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:17.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:20.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:22.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:24.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:26.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:29.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:30.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:33.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:35.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:37.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:39.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:42.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:44.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:46.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:48.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:51.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:53.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:55.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:57.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:00.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:02.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:04.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:07.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:09.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:11.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:13.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:16.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:18.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:20.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:22.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:24.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:26.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:29.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:31.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:33.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:35.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:38.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:40.229 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:42.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:44.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:47.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:49.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:51.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:54.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:56.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:58.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:00.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:03.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:05.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:05.246 08:28:38 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:32:05.246 08:28:38 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:32:05.246 08:28:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:05.246 08:28:38 -- nvmf/common.sh@116 -- # sync 00:32:05.246 08:28:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:05.246 08:28:38 -- nvmf/common.sh@119 -- # set +e 00:32:05.246 08:28:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:05.246 08:28:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:05.246 rmmod nvme_tcp 00:32:05.246 rmmod nvme_fabrics 00:32:05.246 rmmod nvme_keyring 00:32:05.246 08:28:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:05.246 08:28:38 -- nvmf/common.sh@123 -- # set -e 00:32:05.246 08:28:38 -- nvmf/common.sh@124 -- # return 0 00:32:05.246 08:28:38 -- nvmf/common.sh@477 -- # '[' -n 61903 ']' 00:32:05.246 08:28:38 -- nvmf/common.sh@478 -- # killprocess 61903 00:32:05.246 08:28:38 -- common/autotest_common.sh@926 -- # '[' -z 61903 ']' 00:32:05.246 08:28:38 -- common/autotest_common.sh@930 -- # kill -0 61903 00:32:05.246 08:28:38 -- common/autotest_common.sh@931 -- # uname 00:32:05.246 08:28:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:05.246 08:28:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61903 00:32:05.246 killing process with pid 61903 00:32:05.246 08:28:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:05.246 08:28:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:05.246 08:28:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61903' 00:32:05.246 08:28:38 -- common/autotest_common.sh@945 -- # kill 61903 00:32:05.246 08:28:38 -- common/autotest_common.sh@950 -- # wait 61903 00:32:05.505 08:28:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:05.505 08:28:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:05.505 08:28:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:05.505 08:28:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:05.505 08:28:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:05.505 08:28:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.505 08:28:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:05.505 08:28:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.505 08:28:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:32:05.505 ************************************ 00:32:05.505 END TEST nvmf_connect_disconnect 00:32:05.505 ************************************ 00:32:05.505 00:32:05.505 real 3m45.589s 
00:32:05.505 user 14m48.854s 00:32:05.505 sys 0m14.170s 00:32:05.505 08:28:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:05.505 08:28:38 -- common/autotest_common.sh@10 -- # set +x 00:32:05.505 08:28:38 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:32:05.505 08:28:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:05.505 08:28:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:05.505 08:28:38 -- common/autotest_common.sh@10 -- # set +x 00:32:05.505 ************************************ 00:32:05.505 START TEST nvmf_multitarget 00:32:05.505 ************************************ 00:32:05.505 08:28:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:32:05.764 * Looking for test storage... 00:32:05.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:05.764 08:28:38 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:05.764 08:28:38 -- nvmf/common.sh@7 -- # uname -s 00:32:05.764 08:28:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:05.764 08:28:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:05.764 08:28:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:05.764 08:28:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:05.764 08:28:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:05.764 08:28:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:05.764 08:28:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:05.764 08:28:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:05.764 08:28:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:05.764 08:28:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:05.764 08:28:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:32:05.764 08:28:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:32:05.764 08:28:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:05.764 08:28:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:05.764 08:28:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:05.764 08:28:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:05.764 08:28:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:05.764 08:28:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:05.764 08:28:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:05.764 08:28:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.764 08:28:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.764 08:28:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.764 08:28:38 -- paths/export.sh@5 -- # export PATH 00:32:05.764 08:28:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.764 08:28:38 -- nvmf/common.sh@46 -- # : 0 00:32:05.764 08:28:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:05.764 08:28:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:05.764 08:28:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:05.764 08:28:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:05.764 08:28:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:05.764 08:28:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:05.764 08:28:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:05.764 08:28:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:05.764 08:28:38 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:32:05.764 08:28:38 -- target/multitarget.sh@15 -- # nvmftestinit 00:32:05.764 08:28:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:05.765 08:28:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:05.765 08:28:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:05.765 08:28:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:05.765 08:28:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:05.765 08:28:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.765 08:28:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:05.765 08:28:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.765 08:28:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:32:05.765 08:28:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:32:05.765 08:28:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:32:05.765 08:28:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:32:05.765 08:28:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:32:05.765 08:28:38 -- 
nvmf/common.sh@420 -- # nvmf_veth_init 00:32:05.765 08:28:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:05.765 08:28:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:05.765 08:28:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:05.765 08:28:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:32:05.765 08:28:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:05.765 08:28:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:05.765 08:28:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:05.765 08:28:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:05.765 08:28:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:05.765 08:28:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:05.765 08:28:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:05.765 08:28:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:05.765 08:28:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:32:05.765 08:28:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:32:05.765 Cannot find device "nvmf_tgt_br" 00:32:05.765 08:28:38 -- nvmf/common.sh@154 -- # true 00:32:05.765 08:28:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:32:05.765 Cannot find device "nvmf_tgt_br2" 00:32:05.765 08:28:38 -- nvmf/common.sh@155 -- # true 00:32:05.765 08:28:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:32:05.765 08:28:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:32:05.765 Cannot find device "nvmf_tgt_br" 00:32:05.765 08:28:38 -- nvmf/common.sh@157 -- # true 00:32:05.765 08:28:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:32:05.765 Cannot find device "nvmf_tgt_br2" 00:32:05.765 08:28:39 -- nvmf/common.sh@158 -- # true 00:32:05.765 08:28:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:32:05.765 08:28:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:32:05.765 08:28:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:05.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:05.765 08:28:39 -- nvmf/common.sh@161 -- # true 00:32:05.765 08:28:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:05.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:05.765 08:28:39 -- nvmf/common.sh@162 -- # true 00:32:05.765 08:28:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:32:06.024 08:28:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:06.024 08:28:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:06.024 08:28:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:06.024 08:28:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:06.024 08:28:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:06.024 08:28:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:06.024 08:28:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:06.024 08:28:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:06.024 08:28:39 -- nvmf/common.sh@182 
-- # ip link set nvmf_init_if up 00:32:06.024 08:28:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:32:06.024 08:28:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:32:06.024 08:28:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:32:06.024 08:28:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:06.024 08:28:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:06.024 08:28:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:06.024 08:28:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:32:06.024 08:28:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:32:06.024 08:28:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:32:06.024 08:28:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:06.024 08:28:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:06.024 08:28:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:06.024 08:28:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:06.024 08:28:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:32:06.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:06.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:32:06.024 00:32:06.024 --- 10.0.0.2 ping statistics --- 00:32:06.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.024 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:32:06.024 08:28:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:32:06.024 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:06.024 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:32:06.024 00:32:06.024 --- 10.0.0.3 ping statistics --- 00:32:06.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.024 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:32:06.024 08:28:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:06.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:06.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:32:06.024 00:32:06.024 --- 10.0.0.1 ping statistics --- 00:32:06.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.024 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:32:06.024 08:28:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:06.024 08:28:39 -- nvmf/common.sh@421 -- # return 0 00:32:06.024 08:28:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:06.024 08:28:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:06.024 08:28:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:06.024 08:28:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:06.024 08:28:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:06.024 08:28:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:06.024 08:28:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:06.024 08:28:39 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:32:06.024 08:28:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:06.024 08:28:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:06.024 08:28:39 -- common/autotest_common.sh@10 -- # set +x 00:32:06.024 08:28:39 -- nvmf/common.sh@469 -- # nvmfpid=65669 00:32:06.024 08:28:39 -- nvmf/common.sh@470 -- # waitforlisten 65669 00:32:06.024 08:28:39 -- common/autotest_common.sh@819 -- # '[' -z 65669 ']' 00:32:06.024 08:28:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.024 08:28:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:06.024 08:28:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:06.024 08:28:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.024 08:28:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:06.024 08:28:39 -- common/autotest_common.sh@10 -- # set +x 00:32:06.024 [2024-04-17 08:28:39.338546] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:06.024 [2024-04-17 08:28:39.338624] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.283 [2024-04-17 08:28:39.480975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:06.283 [2024-04-17 08:28:39.584735] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:06.283 [2024-04-17 08:28:39.584879] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.283 [2024-04-17 08:28:39.584887] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.283 [2024-04-17 08:28:39.584893] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
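The two NOTICE lines above describe how to inspect the tracepoints enabled by -e 0xFFFF. As a sketch (the spdk_trace binary path is an assumption, inferred from the build tree used elsewhere in this run):

    # Snapshot live tracepoint events from app instance 0 under the "nvmf" prefix.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # Or keep the raw shared-memory file for offline analysis, as the notice suggests.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0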
00:32:06.283 [2024-04-17 08:28:39.585096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.283 [2024-04-17 08:28:39.585145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:06.283 [2024-04-17 08:28:39.585223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:06.283 [2024-04-17 08:28:39.585227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.219 08:28:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:07.219 08:28:40 -- common/autotest_common.sh@852 -- # return 0 00:32:07.219 08:28:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:07.219 08:28:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:07.219 08:28:40 -- common/autotest_common.sh@10 -- # set +x 00:32:07.219 08:28:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:07.219 08:28:40 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:32:07.219 08:28:40 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:32:07.219 08:28:40 -- target/multitarget.sh@21 -- # jq length 00:32:07.219 08:28:40 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:32:07.219 08:28:40 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:32:07.219 "nvmf_tgt_1" 00:32:07.219 08:28:40 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:32:07.477 "nvmf_tgt_2" 00:32:07.477 08:28:40 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:32:07.477 08:28:40 -- target/multitarget.sh@28 -- # jq length 00:32:07.477 08:28:40 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:32:07.477 08:28:40 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:32:07.735 true 00:32:07.736 08:28:40 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:32:07.736 true 00:32:07.736 08:28:40 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:32:07.736 08:28:40 -- target/multitarget.sh@35 -- # jq length 00:32:07.736 08:28:41 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:32:07.736 08:28:41 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:32:07.736 08:28:41 -- target/multitarget.sh@41 -- # nvmftestfini 00:32:07.736 08:28:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:07.736 08:28:41 -- nvmf/common.sh@116 -- # sync 00:32:07.994 08:28:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:07.994 08:28:41 -- nvmf/common.sh@119 -- # set +e 00:32:07.995 08:28:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:07.995 08:28:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:07.995 rmmod nvme_tcp 00:32:07.995 rmmod nvme_fabrics 00:32:07.995 rmmod nvme_keyring 00:32:07.995 08:28:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:07.995 08:28:41 -- nvmf/common.sh@123 -- # set -e 00:32:07.995 08:28:41 -- nvmf/common.sh@124 -- # return 0 00:32:07.995 08:28:41 -- nvmf/common.sh@477 -- # '[' -n 65669 ']' 00:32:07.995 08:28:41 -- nvmf/common.sh@478 -- # killprocess 65669 00:32:07.995 08:28:41 
-- common/autotest_common.sh@926 -- # '[' -z 65669 ']' 00:32:07.995 08:28:41 -- common/autotest_common.sh@930 -- # kill -0 65669 00:32:07.995 08:28:41 -- common/autotest_common.sh@931 -- # uname 00:32:07.995 08:28:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:07.995 08:28:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65669 00:32:07.995 08:28:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:07.995 08:28:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:07.995 08:28:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65669' 00:32:07.995 killing process with pid 65669 00:32:07.995 08:28:41 -- common/autotest_common.sh@945 -- # kill 65669 00:32:07.995 08:28:41 -- common/autotest_common.sh@950 -- # wait 65669 00:32:08.253 08:28:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:08.253 08:28:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:08.253 08:28:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:08.253 08:28:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:08.253 08:28:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:08.253 08:28:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.253 08:28:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:08.253 08:28:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:08.253 08:28:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:32:08.253 00:32:08.253 real 0m2.700s 00:32:08.253 user 0m8.208s 00:32:08.253 sys 0m0.725s 00:32:08.253 08:28:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:08.253 08:28:41 -- common/autotest_common.sh@10 -- # set +x 00:32:08.253 ************************************ 00:32:08.253 END TEST nvmf_multitarget 00:32:08.253 ************************************ 00:32:08.253 08:28:41 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:32:08.253 08:28:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:08.253 08:28:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:08.253 08:28:41 -- common/autotest_common.sh@10 -- # set +x 00:32:08.253 ************************************ 00:32:08.253 START TEST nvmf_rpc 00:32:08.253 ************************************ 00:32:08.253 08:28:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:32:08.511 * Looking for test storage... 
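Before the nvmf_rpc output continues, the multitarget flow that just finished is worth restating compactly. A sketch of the calls the test made through its helper script, with arguments as seen in the log and outputs noted in comments:

    RPC=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
    $RPC nvmf_get_targets | jq length            # 1: only the default target exists
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32  # prints "nvmf_tgt_1"
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32  # prints "nvmf_tgt_2"
    $RPC nvmf_get_targets | jq length            # 3
    $RPC nvmf_delete_target -n nvmf_tgt_1        # prints true
    $RPC nvmf_delete_target -n nvmf_tgt_2        # prints true
    $RPC nvmf_get_targets | jq length            # back to 1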
00:32:08.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:08.511 08:28:41 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:08.511 08:28:41 -- nvmf/common.sh@7 -- # uname -s 00:32:08.511 08:28:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:08.511 08:28:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:08.511 08:28:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:08.511 08:28:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:08.511 08:28:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:08.511 08:28:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:08.511 08:28:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:08.511 08:28:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:08.511 08:28:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:08.511 08:28:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:08.511 08:28:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:32:08.511 08:28:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:32:08.511 08:28:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:08.511 08:28:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:08.511 08:28:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:08.511 08:28:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:08.511 08:28:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:08.511 08:28:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:08.511 08:28:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:08.511 08:28:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.511 08:28:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.511 08:28:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.511 08:28:41 -- paths/export.sh@5 
-- # export PATH 00:32:08.511 08:28:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.511 08:28:41 -- nvmf/common.sh@46 -- # : 0 00:32:08.511 08:28:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:08.511 08:28:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:08.511 08:28:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:08.511 08:28:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:08.511 08:28:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:08.511 08:28:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:08.511 08:28:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:08.511 08:28:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:08.511 08:28:41 -- target/rpc.sh@11 -- # loops=5 00:32:08.512 08:28:41 -- target/rpc.sh@23 -- # nvmftestinit 00:32:08.512 08:28:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:08.512 08:28:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:08.512 08:28:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:08.512 08:28:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:08.512 08:28:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:08.512 08:28:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.512 08:28:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:08.512 08:28:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:08.512 08:28:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:32:08.512 08:28:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:32:08.512 08:28:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:32:08.512 08:28:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:32:08.512 08:28:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:32:08.512 08:28:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:32:08.512 08:28:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:08.512 08:28:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:08.512 08:28:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:08.512 08:28:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:32:08.512 08:28:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:08.512 08:28:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:08.512 08:28:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:08.512 08:28:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:08.512 08:28:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:08.512 08:28:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:08.512 08:28:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:08.512 08:28:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:08.512 08:28:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:32:08.512 08:28:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:32:08.512 Cannot find device 
"nvmf_tgt_br" 00:32:08.512 08:28:41 -- nvmf/common.sh@154 -- # true 00:32:08.512 08:28:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:32:08.512 Cannot find device "nvmf_tgt_br2" 00:32:08.512 08:28:41 -- nvmf/common.sh@155 -- # true 00:32:08.512 08:28:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:32:08.512 08:28:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:32:08.512 Cannot find device "nvmf_tgt_br" 00:32:08.512 08:28:41 -- nvmf/common.sh@157 -- # true 00:32:08.512 08:28:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:32:08.512 Cannot find device "nvmf_tgt_br2" 00:32:08.512 08:28:41 -- nvmf/common.sh@158 -- # true 00:32:08.512 08:28:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:32:08.512 08:28:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:32:08.512 08:28:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:08.512 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:08.512 08:28:41 -- nvmf/common.sh@161 -- # true 00:32:08.512 08:28:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:08.512 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:08.512 08:28:41 -- nvmf/common.sh@162 -- # true 00:32:08.512 08:28:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:32:08.512 08:28:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:08.512 08:28:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:08.774 08:28:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:08.774 08:28:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:08.774 08:28:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:08.774 08:28:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:08.774 08:28:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:08.774 08:28:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:08.774 08:28:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:32:08.774 08:28:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:32:08.774 08:28:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:32:08.775 08:28:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:32:08.775 08:28:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:08.775 08:28:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:08.775 08:28:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:08.775 08:28:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:32:08.775 08:28:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:32:08.775 08:28:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:32:08.775 08:28:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:08.775 08:28:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:08.775 08:28:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:08.775 08:28:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:08.775 08:28:41 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:32:08.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:08.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:32:08.775 00:32:08.775 --- 10.0.0.2 ping statistics --- 00:32:08.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.775 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:32:08.775 08:28:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:32:08.775 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:08.775 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:32:08.775 00:32:08.775 --- 10.0.0.3 ping statistics --- 00:32:08.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.775 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:32:08.775 08:28:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:08.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:08.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:32:08.775 00:32:08.775 --- 10.0.0.1 ping statistics --- 00:32:08.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.775 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:32:08.775 08:28:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:08.775 08:28:41 -- nvmf/common.sh@421 -- # return 0 00:32:08.775 08:28:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:08.775 08:28:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:08.775 08:28:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:08.775 08:28:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:08.775 08:28:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:08.775 08:28:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:08.775 08:28:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:08.775 08:28:42 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:32:08.775 08:28:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:08.775 08:28:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:08.775 08:28:42 -- common/autotest_common.sh@10 -- # set +x 00:32:08.775 08:28:42 -- nvmf/common.sh@469 -- # nvmfpid=65890 00:32:08.775 08:28:42 -- nvmf/common.sh@470 -- # waitforlisten 65890 00:32:08.775 08:28:42 -- common/autotest_common.sh@819 -- # '[' -z 65890 ']' 00:32:08.775 08:28:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:08.775 08:28:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:08.775 08:28:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:08.775 08:28:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:08.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:08.775 08:28:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:08.775 08:28:42 -- common/autotest_common.sh@10 -- # set +x 00:32:08.775 [2024-04-17 08:28:42.058448] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
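nvmf_veth_init has now rebuilt the same namespace topology for the rpc test. Condensed from the commands visible above (link-up steps omitted for brevity, so this is a sketch rather than a verbatim excerpt), the wiring amounts to:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk     # move target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if            # initiator side, root namespace
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT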
00:32:08.775 [2024-04-17 08:28:42.058511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.046 [2024-04-17 08:28:42.186022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:09.046 [2024-04-17 08:28:42.288314] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:09.046 [2024-04-17 08:28:42.288463] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.046 [2024-04-17 08:28:42.288472] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.046 [2024-04-17 08:28:42.288478] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:09.046 [2024-04-17 08:28:42.288569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.046 [2024-04-17 08:28:42.288896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:09.046 [2024-04-17 08:28:42.289006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:09.046 [2024-04-17 08:28:42.289047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.982 08:28:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:09.982 08:28:42 -- common/autotest_common.sh@852 -- # return 0 00:32:09.982 08:28:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:09.982 08:28:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:09.982 08:28:42 -- common/autotest_common.sh@10 -- # set +x 00:32:09.982 08:28:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:09.982 08:28:43 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:32:09.982 08:28:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.982 08:28:43 -- common/autotest_common.sh@10 -- # set +x 00:32:09.982 08:28:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.983 08:28:43 -- target/rpc.sh@26 -- # stats='{ 00:32:09.983 "poll_groups": [ 00:32:09.983 { 00:32:09.983 "admin_qpairs": 0, 00:32:09.983 "completed_nvme_io": 0, 00:32:09.983 "current_admin_qpairs": 0, 00:32:09.983 "current_io_qpairs": 0, 00:32:09.983 "io_qpairs": 0, 00:32:09.983 "name": "nvmf_tgt_poll_group_0", 00:32:09.983 "pending_bdev_io": 0, 00:32:09.983 "transports": [] 00:32:09.983 }, 00:32:09.983 { 00:32:09.983 "admin_qpairs": 0, 00:32:09.983 "completed_nvme_io": 0, 00:32:09.983 "current_admin_qpairs": 0, 00:32:09.983 "current_io_qpairs": 0, 00:32:09.983 "io_qpairs": 0, 00:32:09.983 "name": "nvmf_tgt_poll_group_1", 00:32:09.983 "pending_bdev_io": 0, 00:32:09.983 "transports": [] 00:32:09.983 }, 00:32:09.983 { 00:32:09.983 "admin_qpairs": 0, 00:32:09.983 "completed_nvme_io": 0, 00:32:09.983 "current_admin_qpairs": 0, 00:32:09.983 "current_io_qpairs": 0, 00:32:09.983 "io_qpairs": 0, 00:32:09.983 "name": "nvmf_tgt_poll_group_2", 00:32:09.983 "pending_bdev_io": 0, 00:32:09.983 "transports": [] 00:32:09.983 }, 00:32:09.983 { 00:32:09.983 "admin_qpairs": 0, 00:32:09.983 "completed_nvme_io": 0, 00:32:09.983 "current_admin_qpairs": 0, 00:32:09.983 "current_io_qpairs": 0, 00:32:09.983 "io_qpairs": 0, 00:32:09.983 "name": "nvmf_tgt_poll_group_3", 00:32:09.983 "pending_bdev_io": 0, 00:32:09.983 "transports": [] 00:32:09.983 } 00:32:09.983 ], 00:32:09.983 "tick_rate": 2290000000 00:32:09.983 }' 00:32:09.983 
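The empty transports arrays in the stats blob above are the baseline the test checks before creating the TCP transport. A sketch of the same checks using the standard rpc.py client, which the harness's rpc_cmd wrapper drives; the expected counts assume the -m 0xF core mask this target was started with:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_get_stats | jq '.poll_groups[].name' | wc -l    # 4 poll groups, one per core
    $RPC nvmf_get_stats | jq '.poll_groups[0].transports[0]'  # null until a transport exists
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_get_stats | jq '.poll_groups[0].transports[0].trtype'  # "TCP"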
08:28:43 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:32:09.983 08:28:43 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:32:09.983 08:28:43 -- target/rpc.sh@15 -- # wc -l 00:32:09.983 08:28:43 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:32:09.983 08:28:43 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:32:09.983 08:28:43 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:32:09.983 08:28:43 -- target/rpc.sh@29 -- # [[ null == null ]] 00:32:09.983 08:28:43 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:09.983 08:28:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.983 08:28:43 -- common/autotest_common.sh@10 -- # set +x 00:32:09.983 [2024-04-17 08:28:43.107100] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:09.983 08:28:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.983 08:28:43 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:32:09.983 08:28:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.983 08:28:43 -- common/autotest_common.sh@10 -- # set +x 00:32:09.983 08:28:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.983 08:28:43 -- target/rpc.sh@33 -- # stats='{ 00:32:09.983 "poll_groups": [ 00:32:09.983 { 00:32:09.983 "admin_qpairs": 0, 00:32:09.983 "completed_nvme_io": 0, 00:32:09.983 "current_admin_qpairs": 0, 00:32:09.983 "current_io_qpairs": 0, 00:32:09.983 "io_qpairs": 0, 00:32:09.983 "name": "nvmf_tgt_poll_group_0", 00:32:09.983 "pending_bdev_io": 0, 00:32:09.983 "transports": [ 00:32:09.983 { 00:32:09.983 "trtype": "TCP" 00:32:09.983 } 00:32:09.983 ] 00:32:09.983 }, 00:32:09.983 { 00:32:09.983 "admin_qpairs": 0, 00:32:09.983 "completed_nvme_io": 0, 00:32:09.983 "current_admin_qpairs": 0, 00:32:09.983 "current_io_qpairs": 0, 00:32:09.983 "io_qpairs": 0, 00:32:09.983 "name": "nvmf_tgt_poll_group_1", 00:32:09.983 "pending_bdev_io": 0, 00:32:09.983 "transports": [ 00:32:09.983 { 00:32:09.983 "trtype": "TCP" 00:32:09.983 } 00:32:09.983 ] 00:32:09.983 }, 00:32:09.983 { 00:32:09.983 "admin_qpairs": 0, 00:32:09.983 "completed_nvme_io": 0, 00:32:09.983 "current_admin_qpairs": 0, 00:32:09.983 "current_io_qpairs": 0, 00:32:09.983 "io_qpairs": 0, 00:32:09.983 "name": "nvmf_tgt_poll_group_2", 00:32:09.983 "pending_bdev_io": 0, 00:32:09.983 "transports": [ 00:32:09.983 { 00:32:09.983 "trtype": "TCP" 00:32:09.983 } 00:32:09.983 ] 00:32:09.983 }, 00:32:09.983 { 00:32:09.983 "admin_qpairs": 0, 00:32:09.983 "completed_nvme_io": 0, 00:32:09.983 "current_admin_qpairs": 0, 00:32:09.983 "current_io_qpairs": 0, 00:32:09.983 "io_qpairs": 0, 00:32:09.983 "name": "nvmf_tgt_poll_group_3", 00:32:09.983 "pending_bdev_io": 0, 00:32:09.983 "transports": [ 00:32:09.983 { 00:32:09.983 "trtype": "TCP" 00:32:09.983 } 00:32:09.983 ] 00:32:09.983 } 00:32:09.983 ], 00:32:09.983 "tick_rate": 2290000000 00:32:09.983 }' 00:32:09.983 08:28:43 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:32:09.983 08:28:43 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:32:09.983 08:28:43 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:32:09.983 08:28:43 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:32:09.983 08:28:43 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:32:09.983 08:28:43 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:32:09.983 08:28:43 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:32:09.983 08:28:43 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:32:09.983 08:28:43 -- 
target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:32:09.983 08:28:43 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:32:09.983 08:28:43 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:32:09.983 08:28:43 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:32:09.983 08:28:43 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:32:09.983 08:28:43 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:32:09.983 08:28:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.983 08:28:43 -- common/autotest_common.sh@10 -- # set +x 00:32:09.983 Malloc1 00:32:09.983 08:28:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.983 08:28:43 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:09.983 08:28:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.983 08:28:43 -- common/autotest_common.sh@10 -- # set +x 00:32:09.983 08:28:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.983 08:28:43 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:09.983 08:28:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.983 08:28:43 -- common/autotest_common.sh@10 -- # set +x 00:32:09.983 08:28:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.983 08:28:43 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:32:09.983 08:28:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.983 08:28:43 -- common/autotest_common.sh@10 -- # set +x 00:32:09.983 08:28:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.983 08:28:43 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:09.983 08:28:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.983 08:28:43 -- common/autotest_common.sh@10 -- # set +x 00:32:09.983 [2024-04-17 08:28:43.273536] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.983 08:28:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.983 08:28:43 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 -a 10.0.0.2 -s 4420 00:32:09.983 08:28:43 -- common/autotest_common.sh@640 -- # local es=0 00:32:09.983 08:28:43 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 -a 10.0.0.2 -s 4420 00:32:09.983 08:28:43 -- common/autotest_common.sh@628 -- # local arg=nvme 00:32:09.983 08:28:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:09.983 08:28:43 -- common/autotest_common.sh@632 -- # type -t nvme 00:32:09.983 08:28:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:09.983 08:28:43 -- common/autotest_common.sh@634 -- # type -P nvme 00:32:09.983 08:28:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:09.983 08:28:43 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:32:09.983 08:28:43 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:32:09.983 08:28:43 -- 
common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 -a 10.0.0.2 -s 4420 00:32:09.983 [2024-04-17 08:28:43.305934] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2' 00:32:09.983 Failed to write to /dev/nvme-fabrics: Input/output error 00:32:09.983 could not add new controller: failed to write to nvme-fabrics device 00:32:10.242 08:28:43 -- common/autotest_common.sh@643 -- # es=1 00:32:10.242 08:28:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:10.242 08:28:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:10.242 08:28:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:10.242 08:28:43 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:32:10.242 08:28:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:10.242 08:28:43 -- common/autotest_common.sh@10 -- # set +x 00:32:10.242 08:28:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:10.242 08:28:43 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:10.242 08:28:43 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:32:10.242 08:28:43 -- common/autotest_common.sh@1177 -- # local i=0 00:32:10.242 08:28:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:32:10.242 08:28:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:32:10.242 08:28:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:32:12.772 08:28:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:32:12.772 08:28:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:32:12.772 08:28:45 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:32:12.772 08:28:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:32:12.772 08:28:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:32:12.772 08:28:45 -- common/autotest_common.sh@1187 -- # return 0 00:32:12.772 08:28:45 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:12.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:12.772 08:28:45 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:12.772 08:28:45 -- common/autotest_common.sh@1198 -- # local i=0 00:32:12.772 08:28:45 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:12.772 08:28:45 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:32:12.772 08:28:45 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:32:12.772 08:28:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:12.772 08:28:45 -- common/autotest_common.sh@1210 -- # return 0 00:32:12.772 08:28:45 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:32:12.772 08:28:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:12.772 08:28:45 -- common/autotest_common.sh@10 
-- # set +x 00:32:12.772 08:28:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:12.772 08:28:45 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:12.772 08:28:45 -- common/autotest_common.sh@640 -- # local es=0 00:32:12.772 08:28:45 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:12.772 08:28:45 -- common/autotest_common.sh@628 -- # local arg=nvme 00:32:12.772 08:28:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:12.772 08:28:45 -- common/autotest_common.sh@632 -- # type -t nvme 00:32:12.772 08:28:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:12.772 08:28:45 -- common/autotest_common.sh@634 -- # type -P nvme 00:32:12.772 08:28:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:12.772 08:28:45 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:32:12.772 08:28:45 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:32:12.772 08:28:45 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:12.772 [2024-04-17 08:28:45.594141] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2' 00:32:12.772 Failed to write to /dev/nvme-fabrics: Input/output error 00:32:12.772 could not add new controller: failed to write to nvme-fabrics device 00:32:12.772 08:28:45 -- common/autotest_common.sh@643 -- # es=1 00:32:12.772 08:28:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:12.772 08:28:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:12.772 08:28:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:12.772 08:28:45 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:32:12.772 08:28:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:12.772 08:28:45 -- common/autotest_common.sh@10 -- # set +x 00:32:12.772 08:28:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:12.772 08:28:45 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:12.772 08:28:45 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:32:12.772 08:28:45 -- common/autotest_common.sh@1177 -- # local i=0 00:32:12.772 08:28:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:32:12.772 08:28:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:32:12.772 08:28:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:32:14.682 08:28:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:32:14.682 08:28:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:32:14.682 08:28:47 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:32:14.682 08:28:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:32:14.682 08:28:47 
-- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:32:14.682 08:28:47 -- common/autotest_common.sh@1187 -- # return 0 00:32:14.682 08:28:47 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:14.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:14.682 08:28:47 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:14.682 08:28:47 -- common/autotest_common.sh@1198 -- # local i=0 00:32:14.682 08:28:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:14.682 08:28:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:32:14.682 08:28:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:14.682 08:28:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:32:14.682 08:28:47 -- common/autotest_common.sh@1210 -- # return 0 00:32:14.682 08:28:47 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:14.682 08:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.682 08:28:47 -- common/autotest_common.sh@10 -- # set +x 00:32:14.682 08:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.682 08:28:47 -- target/rpc.sh@81 -- # seq 1 5 00:32:14.682 08:28:47 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:32:14.682 08:28:47 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:32:14.682 08:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.682 08:28:47 -- common/autotest_common.sh@10 -- # set +x 00:32:14.682 08:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.682 08:28:47 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:14.682 08:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.682 08:28:47 -- common/autotest_common.sh@10 -- # set +x 00:32:14.683 [2024-04-17 08:28:47.883040] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.683 08:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.683 08:28:47 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:32:14.683 08:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.683 08:28:47 -- common/autotest_common.sh@10 -- # set +x 00:32:14.683 08:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.683 08:28:47 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:32:14.683 08:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.683 08:28:47 -- common/autotest_common.sh@10 -- # set +x 00:32:14.683 08:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.683 08:28:47 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:14.941 08:28:48 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:32:14.941 08:28:48 -- common/autotest_common.sh@1177 -- # local i=0 00:32:14.941 08:28:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:32:14.941 08:28:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:32:14.941 08:28:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:32:16.845 08:28:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 
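A connect in this suite is considered successful only once the block device actually shows up: the waitforserial / waitforserial_disconnect helpers traced here (common/autotest_common.sh@1177-1210) poll lsblk -l -o NAME,SERIAL for the subsystem serial instead of sleeping a fixed time. A minimal sketch of that pattern, reconstructed from the xtrace — the initial sleep 2, the retry bound of 15, and the lsblk/grep probes are visible above; the rest of the plumbing is assumed:

    # Wait until nvme_device_counter block devices with the given SERIAL appear
    # (assumed reconstruction of the helper traced above).
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        sleep 2
        while (( i++ <= 15 )); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial") || true
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 1
        done
        return 1
    }

    # Wait until no device with that SERIAL is left after nvme disconnect.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( i++ > 15 )) && return 1
            sleep 1
        done
        return 0
    }

Polling on the serial rather than the controller name keeps the check stable across runs, since the kernel may hand out a different /dev/nvmeX index each time.
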
00:32:16.845 08:28:50 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:32:16.845 08:28:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:32:16.845 08:28:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:32:16.845 08:28:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:32:16.845 08:28:50 -- common/autotest_common.sh@1187 -- # return 0 00:32:16.845 08:28:50 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:16.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:16.845 08:28:50 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:16.845 08:28:50 -- common/autotest_common.sh@1198 -- # local i=0 00:32:16.845 08:28:50 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:16.845 08:28:50 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:32:16.845 08:28:50 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:32:16.845 08:28:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:16.845 08:28:50 -- common/autotest_common.sh@1210 -- # return 0 00:32:16.845 08:28:50 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:16.845 08:28:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:16.845 08:28:50 -- common/autotest_common.sh@10 -- # set +x 00:32:16.845 08:28:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:16.845 08:28:50 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:16.845 08:28:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:16.845 08:28:50 -- common/autotest_common.sh@10 -- # set +x 00:32:16.845 08:28:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:16.845 08:28:50 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:32:16.845 08:28:50 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:32:16.845 08:28:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:16.845 08:28:50 -- common/autotest_common.sh@10 -- # set +x 00:32:16.845 08:28:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:16.845 08:28:50 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:16.845 08:28:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:16.845 08:28:50 -- common/autotest_common.sh@10 -- # set +x 00:32:16.845 [2024-04-17 08:28:50.173544] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:17.105 08:28:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:17.105 08:28:50 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:32:17.105 08:28:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:17.105 08:28:50 -- common/autotest_common.sh@10 -- # set +x 00:32:17.105 08:28:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:17.105 08:28:50 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:32:17.105 08:28:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:17.105 08:28:50 -- common/autotest_common.sh@10 -- # set +x 00:32:17.105 08:28:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:17.105 08:28:50 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 
--hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:17.105 08:28:50 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:32:17.105 08:28:50 -- common/autotest_common.sh@1177 -- # local i=0 00:32:17.105 08:28:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:32:17.105 08:28:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:32:17.105 08:28:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:32:19.638 08:28:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:32:19.638 08:28:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:32:19.638 08:28:52 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:32:19.638 08:28:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:32:19.638 08:28:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:32:19.638 08:28:52 -- common/autotest_common.sh@1187 -- # return 0 00:32:19.638 08:28:52 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:19.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:19.638 08:28:52 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:19.638 08:28:52 -- common/autotest_common.sh@1198 -- # local i=0 00:32:19.638 08:28:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:32:19.638 08:28:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:19.638 08:28:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:32:19.638 08:28:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:19.638 08:28:52 -- common/autotest_common.sh@1210 -- # return 0 00:32:19.638 08:28:52 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:19.638 08:28:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:19.638 08:28:52 -- common/autotest_common.sh@10 -- # set +x 00:32:19.638 08:28:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:19.638 08:28:52 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:19.638 08:28:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:19.638 08:28:52 -- common/autotest_common.sh@10 -- # set +x 00:32:19.638 08:28:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:19.638 08:28:52 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:32:19.638 08:28:52 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:32:19.638 08:28:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:19.638 08:28:52 -- common/autotest_common.sh@10 -- # set +x 00:32:19.638 08:28:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:19.638 08:28:52 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:19.638 08:28:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:19.638 08:28:52 -- common/autotest_common.sh@10 -- # set +x 00:32:19.638 [2024-04-17 08:28:52.483857] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:19.638 08:28:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:19.638 08:28:52 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:32:19.638 08:28:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:19.638 08:28:52 -- common/autotest_common.sh@10 -- # set 
+x 00:32:19.638 08:28:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:19.638 08:28:52 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:32:19.638 08:28:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:19.638 08:28:52 -- common/autotest_common.sh@10 -- # set +x 00:32:19.638 08:28:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:19.638 08:28:52 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:19.638 08:28:52 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:32:19.638 08:28:52 -- common/autotest_common.sh@1177 -- # local i=0 00:32:19.638 08:28:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:32:19.638 08:28:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:32:19.638 08:28:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:32:21.541 08:28:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:32:21.541 08:28:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:32:21.541 08:28:54 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:32:21.541 08:28:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:32:21.541 08:28:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:32:21.541 08:28:54 -- common/autotest_common.sh@1187 -- # return 0 00:32:21.541 08:28:54 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:21.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:21.541 08:28:54 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:21.541 08:28:54 -- common/autotest_common.sh@1198 -- # local i=0 00:32:21.541 08:28:54 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:32:21.541 08:28:54 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:21.541 08:28:54 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:32:21.541 08:28:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:21.541 08:28:54 -- common/autotest_common.sh@1210 -- # return 0 00:32:21.541 08:28:54 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:21.541 08:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:21.541 08:28:54 -- common/autotest_common.sh@10 -- # set +x 00:32:21.541 08:28:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:21.541 08:28:54 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:21.541 08:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:21.541 08:28:54 -- common/autotest_common.sh@10 -- # set +x 00:32:21.541 08:28:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:21.542 08:28:54 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:32:21.542 08:28:54 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:32:21.542 08:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:21.542 08:28:54 -- common/autotest_common.sh@10 -- # set +x 00:32:21.542 08:28:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:21.542 08:28:54 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:21.542 08:28:54 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:32:21.542 08:28:54 -- common/autotest_common.sh@10 -- # set +x 00:32:21.542 [2024-04-17 08:28:54.774340] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:21.542 08:28:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:21.542 08:28:54 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:32:21.542 08:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:21.542 08:28:54 -- common/autotest_common.sh@10 -- # set +x 00:32:21.542 08:28:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:21.542 08:28:54 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:32:21.542 08:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:21.542 08:28:54 -- common/autotest_common.sh@10 -- # set +x 00:32:21.542 08:28:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:21.542 08:28:54 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:21.800 08:28:54 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:32:21.800 08:28:54 -- common/autotest_common.sh@1177 -- # local i=0 00:32:21.800 08:28:54 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:32:21.800 08:28:54 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:32:21.800 08:28:54 -- common/autotest_common.sh@1184 -- # sleep 2 00:32:23.705 08:28:56 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:32:23.705 08:28:56 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:32:23.705 08:28:56 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:32:23.705 08:28:56 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:32:23.705 08:28:56 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:32:23.705 08:28:56 -- common/autotest_common.sh@1187 -- # return 0 00:32:23.705 08:28:56 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:23.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:23.705 08:28:57 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:23.705 08:28:57 -- common/autotest_common.sh@1198 -- # local i=0 00:32:23.705 08:28:57 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:23.705 08:28:57 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:32:23.705 08:28:57 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:32:23.705 08:28:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:23.963 08:28:57 -- common/autotest_common.sh@1210 -- # return 0 00:32:23.963 08:28:57 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:23.963 08:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:23.963 08:28:57 -- common/autotest_common.sh@10 -- # set +x 00:32:23.963 08:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:23.963 08:28:57 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:23.963 08:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:23.963 08:28:57 -- common/autotest_common.sh@10 -- # set +x 00:32:23.963 08:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:23.963 08:28:57 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 
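From rpc.sh@81 onward the test repeats one create/connect/teardown cycle per loop iteration; the traces above and below differ only in timestamps. Condensed into a sketch, every command below appears verbatim in the xtrace, with rpc_cmd being the suite's wrapper around scripts/rpc.py:

    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

Reusing the same NQN and namespace ID 5 on every pass is what exercises the target's delete/re-create path rather than just creating fresh objects.
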
00:32:23.963 08:28:57 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:32:23.963 08:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:23.963 08:28:57 -- common/autotest_common.sh@10 -- # set +x 00:32:23.963 08:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:23.963 08:28:57 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:23.963 08:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:23.963 08:28:57 -- common/autotest_common.sh@10 -- # set +x 00:32:23.963 [2024-04-17 08:28:57.077499] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:23.963 08:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:23.963 08:28:57 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:32:23.963 08:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:23.963 08:28:57 -- common/autotest_common.sh@10 -- # set +x 00:32:23.963 08:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:23.963 08:28:57 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:32:23.963 08:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:23.963 08:28:57 -- common/autotest_common.sh@10 -- # set +x 00:32:23.963 08:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:23.963 08:28:57 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:23.963 08:28:57 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:32:23.963 08:28:57 -- common/autotest_common.sh@1177 -- # local i=0 00:32:23.963 08:28:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:32:23.963 08:28:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:32:23.963 08:28:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:32:25.976 08:28:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:32:25.976 08:28:59 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:32:25.976 08:28:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:32:25.976 08:28:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:32:25.976 08:28:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:32:25.976 08:28:59 -- common/autotest_common.sh@1187 -- # return 0 00:32:25.976 08:28:59 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:26.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:26.239 08:28:59 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:26.239 08:28:59 -- common/autotest_common.sh@1198 -- # local i=0 00:32:26.239 08:28:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:32:26.239 08:28:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:26.239 08:28:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:32:26.239 08:28:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:26.239 08:28:59 -- common/autotest_common.sh@1210 -- # return 0 00:32:26.239 08:28:59 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:26.239 08:28:59 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:32:26.239 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.239 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.239 08:28:59 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:26.239 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.239 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.239 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.239 08:28:59 -- target/rpc.sh@99 -- # seq 1 5 00:32:26.239 08:28:59 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:32:26.239 08:28:59 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:32:26.239 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.239 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.239 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.239 08:28:59 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:26.239 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.239 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.239 [2024-04-17 08:28:59.476664] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:26.239 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.239 08:28:59 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:26.239 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.239 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.239 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.239 08:28:59 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:32:26.239 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.239 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.239 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.239 08:28:59 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:26.239 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.239 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.239 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.239 08:28:59 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:26.239 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.239 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.239 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.239 08:28:59 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:32:26.239 08:28:59 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:32:26.239 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.239 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.239 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.239 08:28:59 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:26.239 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.239 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.239 [2024-04-17 08:28:59.524661] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:32:26.239 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.239 08:28:59 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:26.239 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.239 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.239 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.239 08:28:59 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:32:26.239 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.239 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.239 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.239 08:28:59 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:26.239 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.239 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.240 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.240 08:28:59 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:26.240 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.240 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.240 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.240 08:28:59 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:32:26.240 08:28:59 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:32:26.240 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.240 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.240 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.240 08:28:59 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:26.240 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.240 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.504 [2024-04-17 08:28:59.572702] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:26.504 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.504 08:28:59 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:26.504 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.504 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.504 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.504 08:28:59 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:32:26.504 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.504 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.504 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.504 08:28:59 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:26.504 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.504 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.504 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.504 08:28:59 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:26.504 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.504 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.504 08:28:59 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.504 08:28:59 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:32:26.504 08:28:59 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:32:26.504 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.504 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.504 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.504 08:28:59 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:26.504 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.504 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.504 [2024-04-17 08:28:59.628743] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:26.504 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.504 08:28:59 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:26.504 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.504 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.504 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.504 08:28:59 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:32:26.504 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.504 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.504 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.504 08:28:59 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:26.504 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.504 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.504 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.504 08:28:59 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:26.504 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.504 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.504 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.504 08:28:59 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:32:26.504 08:28:59 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:32:26.504 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.504 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.504 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.504 08:28:59 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:26.504 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.504 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.504 [2024-04-17 08:28:59.684808] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:26.504 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.504 08:28:59 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:26.504 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.504 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.504 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.504 08:28:59 -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:32:26.504 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.504 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.504 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.504 08:28:59 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:26.504 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.504 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.504 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.504 08:28:59 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:26.504 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.504 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.504 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.504 08:28:59 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:32:26.504 08:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.504 08:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:26.504 08:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.504 08:28:59 -- target/rpc.sh@110 -- # stats='{ 00:32:26.504 "poll_groups": [ 00:32:26.504 { 00:32:26.504 "admin_qpairs": 2, 00:32:26.504 "completed_nvme_io": 68, 00:32:26.504 "current_admin_qpairs": 0, 00:32:26.504 "current_io_qpairs": 0, 00:32:26.504 "io_qpairs": 16, 00:32:26.504 "name": "nvmf_tgt_poll_group_0", 00:32:26.504 "pending_bdev_io": 0, 00:32:26.504 "transports": [ 00:32:26.504 { 00:32:26.504 "trtype": "TCP" 00:32:26.504 } 00:32:26.504 ] 00:32:26.504 }, 00:32:26.504 { 00:32:26.504 "admin_qpairs": 3, 00:32:26.504 "completed_nvme_io": 116, 00:32:26.504 "current_admin_qpairs": 0, 00:32:26.504 "current_io_qpairs": 0, 00:32:26.504 "io_qpairs": 17, 00:32:26.504 "name": "nvmf_tgt_poll_group_1", 00:32:26.504 "pending_bdev_io": 0, 00:32:26.504 "transports": [ 00:32:26.504 { 00:32:26.504 "trtype": "TCP" 00:32:26.504 } 00:32:26.504 ] 00:32:26.504 }, 00:32:26.504 { 00:32:26.504 "admin_qpairs": 1, 00:32:26.505 "completed_nvme_io": 167, 00:32:26.505 "current_admin_qpairs": 0, 00:32:26.505 "current_io_qpairs": 0, 00:32:26.505 "io_qpairs": 19, 00:32:26.505 "name": "nvmf_tgt_poll_group_2", 00:32:26.505 "pending_bdev_io": 0, 00:32:26.505 "transports": [ 00:32:26.505 { 00:32:26.505 "trtype": "TCP" 00:32:26.505 } 00:32:26.505 ] 00:32:26.505 }, 00:32:26.505 { 00:32:26.505 "admin_qpairs": 1, 00:32:26.505 "completed_nvme_io": 69, 00:32:26.505 "current_admin_qpairs": 0, 00:32:26.505 "current_io_qpairs": 0, 00:32:26.505 "io_qpairs": 18, 00:32:26.505 "name": "nvmf_tgt_poll_group_3", 00:32:26.505 "pending_bdev_io": 0, 00:32:26.505 "transports": [ 00:32:26.505 { 00:32:26.505 "trtype": "TCP" 00:32:26.505 } 00:32:26.505 ] 00:32:26.505 } 00:32:26.505 ], 00:32:26.505 "tick_rate": 2290000000 00:32:26.505 }' 00:32:26.505 08:28:59 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:32:26.505 08:28:59 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:32:26.505 08:28:59 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:32:26.505 08:28:59 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:32:26.505 08:28:59 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:32:26.505 08:28:59 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:32:26.505 08:28:59 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:32:26.505 08:28:59 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 
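The nvmf_get_stats checks around this point aggregate per-poll-group counters with the jcount/jsum helpers from rpc.sh@14-20: jq extracts one number per poll group and wc -l or awk reduces the stream to a single value. A sketch under the assumption that the helpers read the captured stats variable via a here-string; the filters and the jq|wc / jq|awk reductions are verbatim from the trace:

    # Count how many poll groups report the field at all.
    jcount() { jq "$1" <<< "$stats" | wc -l; }

    # Sum the field's value across all poll groups.
    jsum() { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }

    stats=$(rpc_cmd nvmf_get_stats)
    (( $(jcount '.poll_groups[].name') == 4 ))        # one poll group per core with -m 0xF
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+3+1+1 = 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 16+17+19+18 = 70 in this run

The > 0 assertions deliberately avoid exact totals, since how qpairs land on poll groups varies from run to run.
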
00:32:26.505 08:28:59 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:32:26.505 08:28:59 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:32:26.505 08:28:59 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:32:26.505 08:28:59 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:32:26.505 08:28:59 -- target/rpc.sh@123 -- # nvmftestfini 00:32:26.505 08:28:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:26.505 08:28:59 -- nvmf/common.sh@116 -- # sync 00:32:26.764 08:28:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:26.764 08:28:59 -- nvmf/common.sh@119 -- # set +e 00:32:26.764 08:28:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:26.764 08:28:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:26.764 rmmod nvme_tcp 00:32:26.764 rmmod nvme_fabrics 00:32:26.764 rmmod nvme_keyring 00:32:26.765 08:28:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:26.765 08:28:59 -- nvmf/common.sh@123 -- # set -e 00:32:26.765 08:28:59 -- nvmf/common.sh@124 -- # return 0 00:32:26.765 08:28:59 -- nvmf/common.sh@477 -- # '[' -n 65890 ']' 00:32:26.765 08:28:59 -- nvmf/common.sh@478 -- # killprocess 65890 00:32:26.765 08:28:59 -- common/autotest_common.sh@926 -- # '[' -z 65890 ']' 00:32:26.765 08:28:59 -- common/autotest_common.sh@930 -- # kill -0 65890 00:32:26.765 08:28:59 -- common/autotest_common.sh@931 -- # uname 00:32:26.765 08:28:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:26.765 08:28:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65890 00:32:26.765 08:28:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:26.765 killing process with pid 65890 00:32:26.765 08:28:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:26.765 08:28:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65890' 00:32:26.765 08:28:59 -- common/autotest_common.sh@945 -- # kill 65890 00:32:26.765 08:28:59 -- common/autotest_common.sh@950 -- # wait 65890 00:32:27.025 08:29:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:27.025 08:29:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:27.025 08:29:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:27.025 08:29:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:27.025 08:29:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:27.025 08:29:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.025 08:29:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:27.025 08:29:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.025 08:29:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:32:27.025 00:32:27.025 real 0m18.712s 00:32:27.025 user 1m11.386s 00:32:27.025 sys 0m1.430s 00:32:27.025 08:29:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:27.025 08:29:00 -- common/autotest_common.sh@10 -- # set +x 00:32:27.025 ************************************ 00:32:27.025 END TEST nvmf_rpc 00:32:27.025 ************************************ 00:32:27.025 08:29:00 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:32:27.025 08:29:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:27.025 08:29:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:27.025 08:29:00 -- common/autotest_common.sh@10 -- # set +x 00:32:27.025 ************************************ 00:32:27.025 START TEST nvmf_invalid 00:32:27.025 ************************************ 00:32:27.025 
08:29:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:32:27.289 * Looking for test storage... 00:32:27.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:27.289 08:29:00 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:27.289 08:29:00 -- nvmf/common.sh@7 -- # uname -s 00:32:27.289 08:29:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.289 08:29:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.289 08:29:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.289 08:29:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.289 08:29:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.289 08:29:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.289 08:29:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.289 08:29:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.289 08:29:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.289 08:29:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.289 08:29:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:32:27.289 08:29:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:32:27.289 08:29:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.289 08:29:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.289 08:29:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:27.289 08:29:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:27.289 08:29:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.289 08:29:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.289 08:29:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.289 08:29:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.289 08:29:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.289 08:29:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.289 08:29:00 -- paths/export.sh@5 -- # export PATH 00:32:27.289 08:29:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.289 08:29:00 -- nvmf/common.sh@46 -- # : 0 00:32:27.289 08:29:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:27.289 08:29:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:27.289 08:29:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:27.289 08:29:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.289 08:29:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.289 08:29:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:27.289 08:29:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:27.289 08:29:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:27.289 08:29:00 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:32:27.289 08:29:00 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:27.289 08:29:00 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:27.289 08:29:00 -- target/invalid.sh@14 -- # target=foobar 00:32:27.289 08:29:00 -- target/invalid.sh@16 -- # RANDOM=0 00:32:27.289 08:29:00 -- target/invalid.sh@34 -- # nvmftestinit 00:32:27.289 08:29:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:27.289 08:29:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:27.289 08:29:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:27.289 08:29:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:27.289 08:29:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:27.289 08:29:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.289 08:29:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:27.289 08:29:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.289 08:29:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:32:27.289 08:29:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:32:27.289 08:29:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:32:27.289 08:29:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:32:27.290 08:29:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:32:27.290 08:29:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:32:27.290 08:29:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:27.290 08:29:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:27.290 08:29:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
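Before any networking is set up, nvmf/common.sh fixes the host identity once and reuses it for every connect in this log: nvme gen-hostnqn produces the host NQN, and the matching host ID is the UUID embedded in it (the two values traced above share the 1feb06bb-... suffix). A short sketch — the parameter-expansion derivation of NVME_HOSTID is an assumption; the variable names and the NVME_HOST array are as traced:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare UUID (assumed derivation)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'

    # Every connect attempt in this log expands from these pieces:
    $NVME_CONNECT "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

Keeping the host NQN stable across the run is what made the earlier allow-host / does-not-allow-host assertions in rpc.sh deterministic.
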
00:32:27.290 08:29:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:32:27.290 08:29:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:27.290 08:29:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:27.290 08:29:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:27.290 08:29:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:27.290 08:29:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:27.290 08:29:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:27.290 08:29:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:27.290 08:29:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:27.290 08:29:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:32:27.290 08:29:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:32:27.290 Cannot find device "nvmf_tgt_br" 00:32:27.290 08:29:00 -- nvmf/common.sh@154 -- # true 00:32:27.290 08:29:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:32:27.290 Cannot find device "nvmf_tgt_br2" 00:32:27.290 08:29:00 -- nvmf/common.sh@155 -- # true 00:32:27.290 08:29:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:32:27.290 08:29:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:32:27.290 Cannot find device "nvmf_tgt_br" 00:32:27.290 08:29:00 -- nvmf/common.sh@157 -- # true 00:32:27.290 08:29:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:32:27.290 Cannot find device "nvmf_tgt_br2" 00:32:27.290 08:29:00 -- nvmf/common.sh@158 -- # true 00:32:27.290 08:29:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:32:27.290 08:29:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:32:27.290 08:29:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:27.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:27.290 08:29:00 -- nvmf/common.sh@161 -- # true 00:32:27.290 08:29:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:27.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:27.290 08:29:00 -- nvmf/common.sh@162 -- # true 00:32:27.290 08:29:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:32:27.290 08:29:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:27.290 08:29:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:27.290 08:29:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:27.290 08:29:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:27.556 08:29:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:27.556 08:29:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:27.556 08:29:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:27.556 08:29:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:27.556 08:29:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:32:27.556 08:29:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:32:27.556 08:29:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:32:27.556 08:29:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
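nvmf_veth_init, traced in the surrounding entries, builds the NET_TYPE=virt topology entirely in software: a network namespace for the target, veth pairs whose target-side ends move into that namespace, and a bridge joining the host-side peers. Stripped to the single-interface essentials — all names, addresses, and the iptables rule are verbatim from the trace; the second target interface nvmf_tgt_if2 / 10.0.0.3 is created the same way and omitted here:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br    # bridge the host-side peers so
    ip link set nvmf_tgt_br master nvmf_br     # 10.0.0.1 can reach 10.0.0.2
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The "Cannot find device" and "No such file or directory" messages just before this setup are the teardown half of the same helper running first, so a dirty namespace left by a previous run cannot leak into this one; the three pings that follow confirm the initiator, target, and namespace loopback paths before the target app starts.
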
00:32:27.556 08:29:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:27.556 08:29:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:27.556 08:29:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:27.556 08:29:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:32:27.556 08:29:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:32:27.556 08:29:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:32:27.556 08:29:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:27.556 08:29:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:27.556 08:29:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:27.556 08:29:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:27.556 08:29:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:32:27.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:27.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:32:27.556 00:32:27.556 --- 10.0.0.2 ping statistics --- 00:32:27.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:27.556 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:32:27.556 08:29:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:32:27.556 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:27.556 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:32:27.556 00:32:27.556 --- 10.0.0.3 ping statistics --- 00:32:27.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:27.556 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:32:27.556 08:29:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:27.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:27.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:32:27.556 00:32:27.556 --- 10.0.0.1 ping statistics --- 00:32:27.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:27.556 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:32:27.556 08:29:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:27.556 08:29:00 -- nvmf/common.sh@421 -- # return 0 00:32:27.556 08:29:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:27.556 08:29:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:27.556 08:29:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:27.556 08:29:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:27.556 08:29:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:27.556 08:29:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:27.556 08:29:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:27.556 08:29:00 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:32:27.556 08:29:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:27.556 08:29:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:27.556 08:29:00 -- common/autotest_common.sh@10 -- # set +x 00:32:27.556 08:29:00 -- nvmf/common.sh@469 -- # nvmfpid=66403 00:32:27.556 08:29:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:27.556 08:29:00 -- nvmf/common.sh@470 -- # waitforlisten 66403 00:32:27.556 08:29:00 -- common/autotest_common.sh@819 -- # '[' -z 66403 ']' 00:32:27.556 08:29:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.556 08:29:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:27.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:27.556 08:29:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:27.556 08:29:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:27.556 08:29:00 -- common/autotest_common.sh@10 -- # set +x 00:32:27.556 [2024-04-17 08:29:00.834293] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:27.556 [2024-04-17 08:29:00.834377] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:27.824 [2024-04-17 08:29:00.974853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:27.824 [2024-04-17 08:29:01.082485] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:27.824 [2024-04-17 08:29:01.082632] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:27.824 [2024-04-17 08:29:01.082639] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:27.824 [2024-04-17 08:29:01.082645] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
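The rest of nvmf_veth_init, traced above, bridges the root-namespace peers, opens the firewall for NVMe/TCP, verifies reachability, and only then launches the target inside the namespace. A sketch of the same steps, with paths and masks copied from the trace (waitforlisten afterwards simply blocks until the app answers on /var/tmp/spdk.sock):

  # Bring up the namespace side, including loopback.
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the three *_br peers so initiator and target share one L2 segment.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # Accept NVMe/TCP (port 4420) on the initiator link and hairpin traffic on the bridge.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Reachability check in both directions before starting the target.
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

  # Launch nvmf_tgt in the namespace: shm id 0, all tracepoint groups, cores 0-3.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &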
00:32:27.824 [2024-04-17 08:29:01.082790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.824 [2024-04-17 08:29:01.082949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:27.824 [2024-04-17 08:29:01.083127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.824 [2024-04-17 08:29:01.083134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:28.764 08:29:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:28.764 08:29:01 -- common/autotest_common.sh@852 -- # return 0 00:32:28.764 08:29:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:28.764 08:29:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:28.764 08:29:01 -- common/autotest_common.sh@10 -- # set +x 00:32:28.764 08:29:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:28.764 08:29:01 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:32:28.764 08:29:01 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18000 00:32:28.764 [2024-04-17 08:29:02.012183] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:32:28.764 08:29:02 -- target/invalid.sh@40 -- # out='2024/04/17 08:29:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18000 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:32:28.764 request: 00:32:28.764 { 00:32:28.764 "method": "nvmf_create_subsystem", 00:32:28.764 "params": { 00:32:28.764 "nqn": "nqn.2016-06.io.spdk:cnode18000", 00:32:28.764 "tgt_name": "foobar" 00:32:28.764 } 00:32:28.764 } 00:32:28.764 Got JSON-RPC error response 00:32:28.764 GoRPCClient: error on JSON-RPC call' 00:32:28.764 08:29:02 -- target/invalid.sh@41 -- # [[ 2024/04/17 08:29:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18000 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:32:28.764 request: 00:32:28.764 { 00:32:28.764 "method": "nvmf_create_subsystem", 00:32:28.764 "params": { 00:32:28.764 "nqn": "nqn.2016-06.io.spdk:cnode18000", 00:32:28.764 "tgt_name": "foobar" 00:32:28.764 } 00:32:28.764 } 00:32:28.764 Got JSON-RPC error response 00:32:28.764 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:32:28.764 08:29:02 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:32:28.764 08:29:02 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode28356 00:32:29.031 [2024-04-17 08:29:02.319935] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28356: invalid serial number 'SPDKISFASTANDAWESOME' 00:32:29.031 08:29:02 -- target/invalid.sh@45 -- # out='2024/04/17 08:29:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode28356 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:32:29.031 request: 00:32:29.031 { 00:32:29.031 "method": "nvmf_create_subsystem", 00:32:29.031 "params": { 00:32:29.031 "nqn": "nqn.2016-06.io.spdk:cnode28356", 00:32:29.031 
"serial_number": "SPDKISFASTANDAWESOME\u001f" 00:32:29.031 } 00:32:29.031 } 00:32:29.031 Got JSON-RPC error response 00:32:29.031 GoRPCClient: error on JSON-RPC call' 00:32:29.031 08:29:02 -- target/invalid.sh@46 -- # [[ 2024/04/17 08:29:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode28356 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:32:29.031 request: 00:32:29.031 { 00:32:29.031 "method": "nvmf_create_subsystem", 00:32:29.031 "params": { 00:32:29.031 "nqn": "nqn.2016-06.io.spdk:cnode28356", 00:32:29.031 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:32:29.031 } 00:32:29.031 } 00:32:29.031 Got JSON-RPC error response 00:32:29.031 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:32:29.031 08:29:02 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:32:29.031 08:29:02 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17701 00:32:29.292 [2024-04-17 08:29:02.571737] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17701: invalid model number 'SPDK_Controller' 00:32:29.292 08:29:02 -- target/invalid.sh@50 -- # out='2024/04/17 08:29:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode17701], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:32:29.292 request: 00:32:29.292 { 00:32:29.292 "method": "nvmf_create_subsystem", 00:32:29.292 "params": { 00:32:29.292 "nqn": "nqn.2016-06.io.spdk:cnode17701", 00:32:29.292 "model_number": "SPDK_Controller\u001f" 00:32:29.292 } 00:32:29.292 } 00:32:29.292 Got JSON-RPC error response 00:32:29.292 GoRPCClient: error on JSON-RPC call' 00:32:29.292 08:29:02 -- target/invalid.sh@51 -- # [[ 2024/04/17 08:29:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode17701], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:32:29.292 request: 00:32:29.292 { 00:32:29.292 "method": "nvmf_create_subsystem", 00:32:29.292 "params": { 00:32:29.292 "nqn": "nqn.2016-06.io.spdk:cnode17701", 00:32:29.292 "model_number": "SPDK_Controller\u001f" 00:32:29.292 } 00:32:29.292 } 00:32:29.292 Got JSON-RPC error response 00:32:29.292 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:32:29.292 08:29:02 -- target/invalid.sh@54 -- # gen_random_s 21 00:32:29.292 08:29:02 -- target/invalid.sh@19 -- # local length=21 ll 00:32:29.292 08:29:02 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:32:29.292 08:29:02 -- target/invalid.sh@21 -- # local chars 00:32:29.292 08:29:02 -- target/invalid.sh@22 -- # local string 00:32:29.292 08:29:02 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:32:29.292 08:29:02 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:32:29.292 08:29:02 -- target/invalid.sh@25 -- # printf %x 78 00:32:29.292 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:32:29.292 08:29:02 -- target/invalid.sh@25 -- # string+=N 00:32:29.292 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.292 08:29:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.292 08:29:02 -- target/invalid.sh@25 -- # printf %x 63 00:32:29.292 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:32:29.292 08:29:02 -- target/invalid.sh@25 -- # string+='?' 00:32:29.292 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.292 08:29:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.292 08:29:02 -- target/invalid.sh@25 -- # printf %x 36 00:32:29.292 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x24' 00:32:29.292 08:29:02 -- target/invalid.sh@25 -- # string+='$' 00:32:29.292 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.292 08:29:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.292 08:29:02 -- target/invalid.sh@25 -- # printf %x 67 00:32:29.292 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x43' 00:32:29.292 08:29:02 -- target/invalid.sh@25 -- # string+=C 00:32:29.292 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.292 08:29:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.292 08:29:02 -- target/invalid.sh@25 -- # printf %x 71 00:32:29.292 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x47' 00:32:29.292 08:29:02 -- target/invalid.sh@25 -- # string+=G 00:32:29.292 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.292 08:29:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.292 08:29:02 -- target/invalid.sh@25 -- # printf %x 36 00:32:29.292 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x24' 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # string+='$' 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # printf %x 62 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # string+='>' 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # printf %x 42 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # string+='*' 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # printf %x 36 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x24' 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # string+='$' 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # printf %x 50 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x32' 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # string+=2 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # printf %x 92 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # string+='\' 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll 
< length )) 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # printf %x 42 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # string+='*' 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # printf %x 71 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x47' 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # string+=G 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # printf %x 54 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x36' 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # string+=6 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # printf %x 73 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x49' 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # string+=I 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # printf %x 64 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x40' 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # string+=@ 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # printf %x 69 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x45' 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # string+=E 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # printf %x 90 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # string+=Z 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # printf %x 119 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x77' 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # string+=w 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # printf %x 103 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x67' 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # string+=g 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # printf %x 109 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:32:29.552 08:29:02 -- target/invalid.sh@25 -- # string+=m 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.552 08:29:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.552 08:29:02 -- target/invalid.sh@28 -- # [[ N == \- ]] 00:32:29.552 08:29:02 -- target/invalid.sh@31 -- # echo 'N?$CG$>*$2\*G6I@EZwgm' 00:32:29.552 08:29:02 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'N?$CG$>*$2\*G6I@EZwgm' 
nqn.2016-06.io.spdk:cnode27497 00:32:29.812 [2024-04-17 08:29:02.963462] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27497: invalid serial number 'N?$CG$>*$2\*G6I@EZwgm' 00:32:29.812 08:29:02 -- target/invalid.sh@54 -- # out='2024/04/17 08:29:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode27497 serial_number:N?$CG$>*$2\*G6I@EZwgm], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN N?$CG$>*$2\*G6I@EZwgm 00:32:29.812 request: 00:32:29.812 { 00:32:29.812 "method": "nvmf_create_subsystem", 00:32:29.812 "params": { 00:32:29.812 "nqn": "nqn.2016-06.io.spdk:cnode27497", 00:32:29.812 "serial_number": "N?$CG$>*$2\\*G6I@EZwgm" 00:32:29.812 } 00:32:29.812 } 00:32:29.812 Got JSON-RPC error response 00:32:29.812 GoRPCClient: error on JSON-RPC call' 00:32:29.812 08:29:02 -- target/invalid.sh@55 -- # [[ 2024/04/17 08:29:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode27497 serial_number:N?$CG$>*$2\*G6I@EZwgm], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN N?$CG$>*$2\*G6I@EZwgm 00:32:29.812 request: 00:32:29.812 { 00:32:29.812 "method": "nvmf_create_subsystem", 00:32:29.812 "params": { 00:32:29.812 "nqn": "nqn.2016-06.io.spdk:cnode27497", 00:32:29.812 "serial_number": "N?$CG$>*$2\\*G6I@EZwgm" 00:32:29.812 } 00:32:29.812 } 00:32:29.812 Got JSON-RPC error response 00:32:29.812 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:32:29.812 08:29:03 -- target/invalid.sh@58 -- # gen_random_s 41 00:32:29.812 08:29:03 -- target/invalid.sh@19 -- # local length=41 ll 00:32:29.813 08:29:03 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:32:29.813 08:29:03 -- target/invalid.sh@21 -- # local chars 00:32:29.813 08:29:03 -- target/invalid.sh@22 -- # local string 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # printf %x 87 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x57' 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # string+=W 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # printf %x 78 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # string+=N 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # printf %x 69 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x45' 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # string+=E 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # printf 
%x 74 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # string+=J 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # printf %x 67 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x43' 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # string+=C 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # printf %x 47 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # string+=/ 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # printf %x 65 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x41' 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # string+=A 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # printf %x 115 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x73' 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # string+=s 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # printf %x 91 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # string+='[' 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # printf %x 126 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # string+='~' 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # printf %x 119 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x77' 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # string+=w 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # printf %x 74 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # string+=J 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # printf %x 116 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x74' 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # string+=t 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # printf %x 77 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # string+=M 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # printf %x 
124 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # string+='|' 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # printf %x 69 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x45' 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # string+=E 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # printf %x 60 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # string+='<' 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # printf %x 124 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:32:29.813 08:29:03 -- target/invalid.sh@25 -- # string+='|' 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:29.813 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 58 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+=: 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 118 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x76' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+=v 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 112 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x70' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+=p 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 61 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+== 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 33 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x21' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+='!' 
00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 59 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+=';' 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 72 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x48' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+=H 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 98 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x62' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+=b 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 102 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x66' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+=f 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 69 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x45' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+=E 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 126 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+='~' 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 44 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+=, 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 37 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x25' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+=% 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 74 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+=J 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 81 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x51' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+=Q 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 70 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x46' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+=F 
00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 115 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x73' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+=s 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 44 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+=, 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 84 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x54' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+=T 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 54 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x36' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+=6 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 63 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+='?' 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 37 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x25' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+=% 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # printf %x 79 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:32:30.073 08:29:03 -- target/invalid.sh@25 -- # string+=O 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll++ )) 00:32:30.073 08:29:03 -- target/invalid.sh@24 -- # (( ll < length )) 00:32:30.073 08:29:03 -- target/invalid.sh@28 -- # [[ W == \- ]] 00:32:30.073 08:29:03 -- target/invalid.sh@31 -- # echo 'WNEJC/As[~wJtM|E<|:vp=!;HbfE~,%JQFs,T6?%O' 00:32:30.073 08:29:03 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'WNEJC/As[~wJtM|E<|:vp=!;HbfE~,%JQFs,T6?%O' nqn.2016-06.io.spdk:cnode23022 00:32:30.332 [2024-04-17 08:29:03.518934] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23022: invalid model number 'WNEJC/As[~wJtM|E<|:vp=!;HbfE~,%JQFs,T6?%O' 00:32:30.332 08:29:03 -- target/invalid.sh@58 -- # out='2024/04/17 08:29:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:WNEJC/As[~wJtM|E<|:vp=!;HbfE~,%JQFs,T6?%O nqn:nqn.2016-06.io.spdk:cnode23022], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN WNEJC/As[~wJtM|E<|:vp=!;HbfE~,%JQFs,T6?%O 00:32:30.332 request: 00:32:30.332 { 00:32:30.332 "method": "nvmf_create_subsystem", 00:32:30.332 "params": { 00:32:30.332 "nqn": "nqn.2016-06.io.spdk:cnode23022", 00:32:30.332 "model_number": "WNEJC/As[~wJtM|E<|:vp=!;HbfE~,%JQFs,T6?%O" 
00:32:30.332 } 00:32:30.332 } 00:32:30.332 Got JSON-RPC error response 00:32:30.332 GoRPCClient: error on JSON-RPC call' 00:32:30.332 08:29:03 -- target/invalid.sh@59 -- # [[ 2024/04/17 08:29:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:WNEJC/As[~wJtM|E<|:vp=!;HbfE~,%JQFs,T6?%O nqn:nqn.2016-06.io.spdk:cnode23022], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN WNEJC/As[~wJtM|E<|:vp=!;HbfE~,%JQFs,T6?%O 00:32:30.332 request: 00:32:30.332 { 00:32:30.332 "method": "nvmf_create_subsystem", 00:32:30.332 "params": { 00:32:30.332 "nqn": "nqn.2016-06.io.spdk:cnode23022", 00:32:30.332 "model_number": "WNEJC/As[~wJtM|E<|:vp=!;HbfE~,%JQFs,T6?%O" 00:32:30.332 } 00:32:30.332 } 00:32:30.332 Got JSON-RPC error response 00:32:30.332 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:32:30.332 08:29:03 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:32:30.591 [2024-04-17 08:29:03.718821] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:30.591 08:29:03 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:32:30.849 08:29:03 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:32:30.849 08:29:03 -- target/invalid.sh@67 -- # head -n 1 00:32:30.849 08:29:03 -- target/invalid.sh@67 -- # echo '' 00:32:30.849 08:29:03 -- target/invalid.sh@67 -- # IP= 00:32:30.849 08:29:03 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:32:31.118 [2024-04-17 08:29:04.209258] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:32:31.118 08:29:04 -- target/invalid.sh@69 -- # out='2024/04/17 08:29:04 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:32:31.118 request: 00:32:31.118 { 00:32:31.118 "method": "nvmf_subsystem_remove_listener", 00:32:31.118 "params": { 00:32:31.118 "nqn": "nqn.2016-06.io.spdk:cnode", 00:32:31.118 "listen_address": { 00:32:31.118 "trtype": "tcp", 00:32:31.118 "traddr": "", 00:32:31.119 "trsvcid": "4421" 00:32:31.119 } 00:32:31.119 } 00:32:31.119 } 00:32:31.119 Got JSON-RPC error response 00:32:31.119 GoRPCClient: error on JSON-RPC call' 00:32:31.119 08:29:04 -- target/invalid.sh@70 -- # [[ 2024/04/17 08:29:04 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:32:31.119 request: 00:32:31.119 { 00:32:31.119 "method": "nvmf_subsystem_remove_listener", 00:32:31.119 "params": { 00:32:31.119 "nqn": "nqn.2016-06.io.spdk:cnode", 00:32:31.119 "listen_address": { 00:32:31.119 "trtype": "tcp", 00:32:31.119 "traddr": "", 00:32:31.119 "trsvcid": "4421" 00:32:31.119 } 00:32:31.119 } 00:32:31.119 } 00:32:31.119 Got JSON-RPC error response 00:32:31.119 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:32:31.119 08:29:04 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8791 -i 0 00:32:31.379 
[2024-04-17 08:29:04.465006] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8791: invalid cntlid range [0-65519] 00:32:31.379 08:29:04 -- target/invalid.sh@73 -- # out='2024/04/17 08:29:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode8791], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:32:31.379 request: 00:32:31.379 { 00:32:31.379 "method": "nvmf_create_subsystem", 00:32:31.379 "params": { 00:32:31.379 "nqn": "nqn.2016-06.io.spdk:cnode8791", 00:32:31.379 "min_cntlid": 0 00:32:31.379 } 00:32:31.379 } 00:32:31.379 Got JSON-RPC error response 00:32:31.379 GoRPCClient: error on JSON-RPC call' 00:32:31.379 08:29:04 -- target/invalid.sh@74 -- # [[ 2024/04/17 08:29:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode8791], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:32:31.379 request: 00:32:31.379 { 00:32:31.379 "method": "nvmf_create_subsystem", 00:32:31.379 "params": { 00:32:31.379 "nqn": "nqn.2016-06.io.spdk:cnode8791", 00:32:31.379 "min_cntlid": 0 00:32:31.379 } 00:32:31.379 } 00:32:31.379 Got JSON-RPC error response 00:32:31.379 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:32:31.379 08:29:04 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15141 -i 65520 00:32:31.379 [2024-04-17 08:29:04.688759] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15141: invalid cntlid range [65520-65519] 00:32:31.638 08:29:04 -- target/invalid.sh@75 -- # out='2024/04/17 08:29:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode15141], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:32:31.638 request: 00:32:31.638 { 00:32:31.638 "method": "nvmf_create_subsystem", 00:32:31.638 "params": { 00:32:31.638 "nqn": "nqn.2016-06.io.spdk:cnode15141", 00:32:31.638 "min_cntlid": 65520 00:32:31.638 } 00:32:31.638 } 00:32:31.638 Got JSON-RPC error response 00:32:31.638 GoRPCClient: error on JSON-RPC call' 00:32:31.638 08:29:04 -- target/invalid.sh@76 -- # [[ 2024/04/17 08:29:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode15141], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:32:31.638 request: 00:32:31.638 { 00:32:31.638 "method": "nvmf_create_subsystem", 00:32:31.638 "params": { 00:32:31.638 "nqn": "nqn.2016-06.io.spdk:cnode15141", 00:32:31.638 "min_cntlid": 65520 00:32:31.638 } 00:32:31.638 } 00:32:31.638 Got JSON-RPC error response 00:32:31.638 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:32:31.638 08:29:04 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31271 -I 0 00:32:31.638 [2024-04-17 08:29:04.908795] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31271: invalid cntlid range [1-0] 00:32:31.638 08:29:04 -- target/invalid.sh@77 -- # out='2024/04/17 08:29:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode31271], 
err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:32:31.638 request: 00:32:31.638 { 00:32:31.638 "method": "nvmf_create_subsystem", 00:32:31.638 "params": { 00:32:31.638 "nqn": "nqn.2016-06.io.spdk:cnode31271", 00:32:31.638 "max_cntlid": 0 00:32:31.638 } 00:32:31.638 } 00:32:31.638 Got JSON-RPC error response 00:32:31.638 GoRPCClient: error on JSON-RPC call' 00:32:31.638 08:29:04 -- target/invalid.sh@78 -- # [[ 2024/04/17 08:29:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode31271], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:32:31.638 request: 00:32:31.638 { 00:32:31.638 "method": "nvmf_create_subsystem", 00:32:31.638 "params": { 00:32:31.638 "nqn": "nqn.2016-06.io.spdk:cnode31271", 00:32:31.638 "max_cntlid": 0 00:32:31.638 } 00:32:31.638 } 00:32:31.638 Got JSON-RPC error response 00:32:31.638 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:32:31.639 08:29:04 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23295 -I 65520 00:32:31.901 [2024-04-17 08:29:05.112597] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23295: invalid cntlid range [1-65520] 00:32:31.901 08:29:05 -- target/invalid.sh@79 -- # out='2024/04/17 08:29:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode23295], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:32:31.901 request: 00:32:31.901 { 00:32:31.901 "method": "nvmf_create_subsystem", 00:32:31.901 "params": { 00:32:31.901 "nqn": "nqn.2016-06.io.spdk:cnode23295", 00:32:31.901 "max_cntlid": 65520 00:32:31.901 } 00:32:31.901 } 00:32:31.901 Got JSON-RPC error response 00:32:31.901 GoRPCClient: error on JSON-RPC call' 00:32:31.901 08:29:05 -- target/invalid.sh@80 -- # [[ 2024/04/17 08:29:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode23295], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:32:31.901 request: 00:32:31.901 { 00:32:31.901 "method": "nvmf_create_subsystem", 00:32:31.901 "params": { 00:32:31.901 "nqn": "nqn.2016-06.io.spdk:cnode23295", 00:32:31.901 "max_cntlid": 65520 00:32:31.901 } 00:32:31.901 } 00:32:31.901 Got JSON-RPC error response 00:32:31.901 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:32:31.901 08:29:05 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19521 -i 6 -I 5 00:32:32.161 [2024-04-17 08:29:05.316451] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19521: invalid cntlid range [6-5] 00:32:32.161 08:29:05 -- target/invalid.sh@83 -- # out='2024/04/17 08:29:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode19521], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:32:32.161 request: 00:32:32.161 { 00:32:32.161 "method": "nvmf_create_subsystem", 00:32:32.161 "params": { 00:32:32.161 "nqn": "nqn.2016-06.io.spdk:cnode19521", 00:32:32.161 "min_cntlid": 6, 00:32:32.161 "max_cntlid": 5 00:32:32.161 } 
00:32:32.161 } 00:32:32.161 Got JSON-RPC error response 00:32:32.161 GoRPCClient: error on JSON-RPC call' 00:32:32.161 08:29:05 -- target/invalid.sh@84 -- # [[ 2024/04/17 08:29:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode19521], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:32:32.161 request: 00:32:32.161 { 00:32:32.161 "method": "nvmf_create_subsystem", 00:32:32.161 "params": { 00:32:32.161 "nqn": "nqn.2016-06.io.spdk:cnode19521", 00:32:32.161 "min_cntlid": 6, 00:32:32.161 "max_cntlid": 5 00:32:32.161 } 00:32:32.161 } 00:32:32.161 Got JSON-RPC error response 00:32:32.161 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:32:32.161 08:29:05 -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:32:32.161 08:29:05 -- target/invalid.sh@87 -- # out='request: 00:32:32.161 { 00:32:32.161 "name": "foobar", 00:32:32.161 "method": "nvmf_delete_target", 00:32:32.161 "req_id": 1 00:32:32.161 } 00:32:32.161 Got JSON-RPC error response 00:32:32.161 response: 00:32:32.161 { 00:32:32.161 "code": -32602, 00:32:32.161 "message": "The specified target doesn'\''t exist, cannot delete it." 00:32:32.161 }' 00:32:32.161 08:29:05 -- target/invalid.sh@88 -- # [[ request: 00:32:32.161 { 00:32:32.161 "name": "foobar", 00:32:32.161 "method": "nvmf_delete_target", 00:32:32.161 "req_id": 1 00:32:32.161 } 00:32:32.161 Got JSON-RPC error response 00:32:32.161 response: 00:32:32.161 { 00:32:32.161 "code": -32602, 00:32:32.161 "message": "The specified target doesn't exist, cannot delete it." 00:32:32.161 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:32:32.161 08:29:05 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:32:32.161 08:29:05 -- target/invalid.sh@91 -- # nvmftestfini 00:32:32.161 08:29:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:32.161 08:29:05 -- nvmf/common.sh@116 -- # sync 00:32:32.421 08:29:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:32.421 08:29:05 -- nvmf/common.sh@119 -- # set +e 00:32:32.421 08:29:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:32.421 08:29:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:32.421 rmmod nvme_tcp 00:32:32.421 rmmod nvme_fabrics 00:32:32.421 rmmod nvme_keyring 00:32:32.421 08:29:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:32.421 08:29:05 -- nvmf/common.sh@123 -- # set -e 00:32:32.421 08:29:05 -- nvmf/common.sh@124 -- # return 0 00:32:32.421 08:29:05 -- nvmf/common.sh@477 -- # '[' -n 66403 ']' 00:32:32.421 08:29:05 -- nvmf/common.sh@478 -- # killprocess 66403 00:32:32.421 08:29:05 -- common/autotest_common.sh@926 -- # '[' -z 66403 ']' 00:32:32.421 08:29:05 -- common/autotest_common.sh@930 -- # kill -0 66403 00:32:32.421 08:29:05 -- common/autotest_common.sh@931 -- # uname 00:32:32.421 08:29:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:32.421 08:29:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66403 00:32:32.421 killing process with pid 66403 00:32:32.421 08:29:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:32.421 08:29:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:32.421 08:29:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66403' 00:32:32.421 08:29:05 -- 
common/autotest_common.sh@945 -- # kill 66403 00:32:32.421 08:29:05 -- common/autotest_common.sh@950 -- # wait 66403 00:32:32.680 08:29:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:32.680 08:29:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:32.680 08:29:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:32.680 08:29:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:32.680 08:29:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:32.681 08:29:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.681 08:29:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:32.681 08:29:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.681 08:29:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:32:32.681 00:32:32.681 real 0m5.576s 00:32:32.681 user 0m21.936s 00:32:32.681 sys 0m1.301s 00:32:32.681 08:29:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:32.681 08:29:05 -- common/autotest_common.sh@10 -- # set +x 00:32:32.681 ************************************ 00:32:32.681 END TEST nvmf_invalid 00:32:32.681 ************************************ 00:32:32.681 08:29:05 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:32:32.681 08:29:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:32.681 08:29:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:32.681 08:29:05 -- common/autotest_common.sh@10 -- # set +x 00:32:32.681 ************************************ 00:32:32.681 START TEST nvmf_abort 00:32:32.681 ************************************ 00:32:32.681 08:29:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:32:32.940 * Looking for test storage... 
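Condensed from the nvmf_invalid run that just finished: every negative test issues a single rpc.py call with one deliberately bad parameter and pattern-matches the JSON-RPC error text. A minimal sketch against an already-running target, using the rpc.py path from the trace; gen_random_s below is a compact stand-in for the test's character-table helper, assuming the same printable 32-127 range:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Stand-in for gen_random_s: n random printable characters (codes 32-127).
  gen_random_s() {
      local n=$1 s='' c ll
      for ((ll = 0; ll < n; ll++)); do
          printf -v c "\\x$(printf '%x' $((RANDOM % 96 + 32)))"
          s+=$c
      done
      printf '%s' "$s"
  }

  # Unknown target name is rejected with "Unable to find target foobar".
  out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18000 2>&1) || true
  [[ $out == *'Unable to find target'* ]] && echo 'bad tgt_name rejected'

  # A 21-character serial number overflows the 20-byte SN field -> "Invalid SN".
  # Like the real test, regenerate if the string starts with '-' so rpc.py
  # does not parse it as an option.
  sn=$(gen_random_s 21); while [[ $sn == -* ]]; do sn=$(gen_random_s 21); done
  out=$($rpc nvmf_create_subsystem -s "$sn" nqn.2016-06.io.spdk:cnode27497 2>&1) || true
  [[ $out == *'Invalid SN'* ]] && echo 'oversized serial rejected'

  # cntlid bounds: the valid window is 1-65519 and min must not exceed max.
  out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8791 -i 0 2>&1) || true
  [[ $out == *'Invalid cntlid range'* ]] && echo 'cntlid range rejected'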
00:32:32.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:32.940 08:29:06 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:32.940 08:29:06 -- nvmf/common.sh@7 -- # uname -s 00:32:32.940 08:29:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:32.940 08:29:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:32.940 08:29:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:32.940 08:29:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:32.940 08:29:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:32.940 08:29:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:32.940 08:29:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:32.940 08:29:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:32.940 08:29:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:32.940 08:29:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:32.940 08:29:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:32:32.940 08:29:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:32:32.940 08:29:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:32.940 08:29:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:32.940 08:29:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:32.940 08:29:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:32.940 08:29:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:32.940 08:29:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:32.940 08:29:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:32.940 08:29:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.940 08:29:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.940 08:29:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.940 08:29:06 -- paths/export.sh@5 
-- # export PATH 00:32:32.940 08:29:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.940 08:29:06 -- nvmf/common.sh@46 -- # : 0 00:32:32.940 08:29:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:32.940 08:29:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:32.940 08:29:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:32.940 08:29:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:32.940 08:29:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:32.940 08:29:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:32.940 08:29:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:32.940 08:29:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:32.940 08:29:06 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:32.940 08:29:06 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:32:32.940 08:29:06 -- target/abort.sh@14 -- # nvmftestinit 00:32:32.940 08:29:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:32.940 08:29:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:32.940 08:29:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:32.940 08:29:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:32.940 08:29:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:32.941 08:29:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.941 08:29:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:32.941 08:29:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.941 08:29:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:32:32.941 08:29:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:32:32.941 08:29:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:32:32.941 08:29:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:32:32.941 08:29:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:32:32.941 08:29:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:32:32.941 08:29:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:32.941 08:29:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:32.941 08:29:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:32.941 08:29:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:32:32.941 08:29:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:32.941 08:29:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:32.941 08:29:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:32.941 08:29:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:32.941 08:29:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:32.941 08:29:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:32.941 08:29:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:32.941 08:29:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:32.941 08:29:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:32:32.941 08:29:06 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:32:32.941 Cannot find device "nvmf_tgt_br" 00:32:32.941 08:29:06 -- nvmf/common.sh@154 -- # true 00:32:32.941 08:29:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:32:32.941 Cannot find device "nvmf_tgt_br2" 00:32:32.941 08:29:06 -- nvmf/common.sh@155 -- # true 00:32:32.941 08:29:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:32:32.941 08:29:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:32:32.941 Cannot find device "nvmf_tgt_br" 00:32:32.941 08:29:06 -- nvmf/common.sh@157 -- # true 00:32:32.941 08:29:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:32:32.941 Cannot find device "nvmf_tgt_br2" 00:32:32.941 08:29:06 -- nvmf/common.sh@158 -- # true 00:32:32.941 08:29:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:32:32.941 08:29:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:32:32.941 08:29:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:32.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:32.941 08:29:06 -- nvmf/common.sh@161 -- # true 00:32:32.941 08:29:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:32.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:32.941 08:29:06 -- nvmf/common.sh@162 -- # true 00:32:32.941 08:29:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:32:32.941 08:29:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:33.199 08:29:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:33.199 08:29:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:33.199 08:29:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:33.199 08:29:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:33.199 08:29:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:33.199 08:29:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:33.199 08:29:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:33.199 08:29:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:32:33.199 08:29:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:32:33.199 08:29:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:32:33.199 08:29:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:32:33.199 08:29:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:33.199 08:29:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:33.199 08:29:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:33.199 08:29:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:32:33.199 08:29:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:32:33.199 08:29:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:32:33.199 08:29:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:33.199 08:29:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:33.199 08:29:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:33.199 08:29:06 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:33.199 08:29:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:32:33.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:33.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:32:33.199 00:32:33.199 --- 10.0.0.2 ping statistics --- 00:32:33.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:33.199 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:32:33.199 08:29:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:32:33.199 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:33.199 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.130 ms 00:32:33.199 00:32:33.199 --- 10.0.0.3 ping statistics --- 00:32:33.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:33.199 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:32:33.199 08:29:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:33.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:33.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:32:33.199 00:32:33.199 --- 10.0.0.1 ping statistics --- 00:32:33.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:33.199 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:32:33.200 08:29:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:33.200 08:29:06 -- nvmf/common.sh@421 -- # return 0 00:32:33.200 08:29:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:33.200 08:29:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:33.200 08:29:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:33.200 08:29:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:33.200 08:29:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:33.200 08:29:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:33.200 08:29:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:33.200 08:29:06 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:32:33.200 08:29:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:33.200 08:29:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:33.200 08:29:06 -- common/autotest_common.sh@10 -- # set +x 00:32:33.200 08:29:06 -- nvmf/common.sh@469 -- # nvmfpid=66915 00:32:33.200 08:29:06 -- nvmf/common.sh@470 -- # waitforlisten 66915 00:32:33.200 08:29:06 -- common/autotest_common.sh@819 -- # '[' -z 66915 ']' 00:32:33.200 08:29:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:33.200 08:29:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:33.200 08:29:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:33.200 08:29:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:33.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:33.200 08:29:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:33.200 08:29:06 -- common/autotest_common.sh@10 -- # set +x 00:32:33.458 [2024-04-17 08:29:06.536096] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
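For readers following the trace: the nvmf_veth_init plumbing above reduces to roughly the following sequence (a condensed sketch rather than the exact helper; the namespace and interface names are the ones used in the trace, and the second target interface nvmf_tgt_if2 is elided):

    ip netns add nvmf_tgt_ns_spdk                          # target gets its own network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk         # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if               # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                # bridge the two veth peers together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator-to-target reachability check

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are expected: teardown of any leftover topology runs first and tolerates missing devices, which is why each failing command is followed by "# true".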
00:32:33.458 [2024-04-17 08:29:06.536532] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:33.458 [2024-04-17 08:29:06.666692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:33.458 [2024-04-17 08:29:06.770325] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:33.458 [2024-04-17 08:29:06.770483] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:33.458 [2024-04-17 08:29:06.770492] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:33.458 [2024-04-17 08:29:06.770499] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:33.458 [2024-04-17 08:29:06.770617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:33.458 [2024-04-17 08:29:06.770742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:33.458 [2024-04-17 08:29:06.770775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.396 08:29:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:34.396 08:29:07 -- common/autotest_common.sh@852 -- # return 0 00:32:34.396 08:29:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:34.396 08:29:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:34.396 08:29:07 -- common/autotest_common.sh@10 -- # set +x 00:32:34.396 08:29:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:34.396 08:29:07 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:32:34.396 08:29:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:34.396 08:29:07 -- common/autotest_common.sh@10 -- # set +x 00:32:34.396 [2024-04-17 08:29:07.532179] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:34.396 08:29:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:34.396 08:29:07 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:32:34.396 08:29:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:34.396 08:29:07 -- common/autotest_common.sh@10 -- # set +x 00:32:34.396 Malloc0 00:32:34.396 08:29:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:34.396 08:29:07 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:34.396 08:29:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:34.396 08:29:07 -- common/autotest_common.sh@10 -- # set +x 00:32:34.396 Delay0 00:32:34.396 08:29:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:34.396 08:29:07 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:34.396 08:29:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:34.396 08:29:07 -- common/autotest_common.sh@10 -- # set +x 00:32:34.396 08:29:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:34.396 08:29:07 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:32:34.396 08:29:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:34.396 08:29:07 -- common/autotest_common.sh@10 -- # set +x 00:32:34.396 08:29:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:34.396 08:29:07 -- 
target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:34.396 08:29:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:34.396 08:29:07 -- common/autotest_common.sh@10 -- # set +x 00:32:34.396 [2024-04-17 08:29:07.609483] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:34.396 08:29:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:34.396 08:29:07 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:34.396 08:29:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:34.396 08:29:07 -- common/autotest_common.sh@10 -- # set +x 00:32:34.396 08:29:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:34.396 08:29:07 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:32:34.655 [2024-04-17 08:29:07.790743] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:36.583 Initializing NVMe Controllers 00:32:36.583 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:36.583 controller IO queue size 128 less than required 00:32:36.583 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:32:36.583 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:32:36.583 Initialization complete. Launching workers. 00:32:36.583 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 39778 00:32:36.583 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 39839, failed to submit 62 00:32:36.583 success 39778, unsuccess 61, failed 0 00:32:36.583 08:29:09 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:36.583 08:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:36.583 08:29:09 -- common/autotest_common.sh@10 -- # set +x 00:32:36.583 08:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:36.583 08:29:09 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:32:36.583 08:29:09 -- target/abort.sh@38 -- # nvmftestfini 00:32:36.583 08:29:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:36.583 08:29:09 -- nvmf/common.sh@116 -- # sync 00:32:36.583 08:29:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:36.583 08:29:09 -- nvmf/common.sh@119 -- # set +e 00:32:36.583 08:29:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:36.583 08:29:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:36.583 rmmod nvme_tcp 00:32:36.583 rmmod nvme_fabrics 00:32:36.583 rmmod nvme_keyring 00:32:36.843 08:29:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:36.843 08:29:09 -- nvmf/common.sh@123 -- # set -e 00:32:36.843 08:29:09 -- nvmf/common.sh@124 -- # return 0 00:32:36.843 08:29:09 -- nvmf/common.sh@477 -- # '[' -n 66915 ']' 00:32:36.843 08:29:09 -- nvmf/common.sh@478 -- # killprocess 66915 00:32:36.843 08:29:09 -- common/autotest_common.sh@926 -- # '[' -z 66915 ']' 00:32:36.843 08:29:09 -- common/autotest_common.sh@930 -- # kill -0 66915 00:32:36.843 08:29:09 -- common/autotest_common.sh@931 -- # uname 00:32:36.843 08:29:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:36.843 08:29:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
66915 00:32:36.843 killing process with pid 66915 00:32:36.843 08:29:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:36.843 08:29:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:36.843 08:29:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66915' 00:32:36.843 08:29:09 -- common/autotest_common.sh@945 -- # kill 66915 00:32:36.843 08:29:09 -- common/autotest_common.sh@950 -- # wait 66915 00:32:37.102 08:29:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:37.102 08:29:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:37.102 08:29:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:37.102 08:29:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:37.102 08:29:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:37.102 08:29:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.102 08:29:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:37.102 08:29:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:37.102 08:29:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:32:37.102 00:32:37.102 real 0m4.326s 00:32:37.102 user 0m12.361s 00:32:37.102 sys 0m0.924s 00:32:37.102 08:29:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:37.102 08:29:10 -- common/autotest_common.sh@10 -- # set +x 00:32:37.102 ************************************ 00:32:37.102 END TEST nvmf_abort 00:32:37.102 ************************************ 00:32:37.102 08:29:10 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:32:37.102 08:29:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:37.102 08:29:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:37.102 08:29:10 -- common/autotest_common.sh@10 -- # set +x 00:32:37.102 ************************************ 00:32:37.102 START TEST nvmf_ns_hotplug_stress 00:32:37.102 ************************************ 00:32:37.102 08:29:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:32:37.362 * Looking for test storage... 
00:32:37.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:37.362 08:29:10 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:37.362 08:29:10 -- nvmf/common.sh@7 -- # uname -s 00:32:37.362 08:29:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:37.362 08:29:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:37.362 08:29:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:37.362 08:29:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:37.362 08:29:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:37.362 08:29:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:37.362 08:29:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:37.362 08:29:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:37.362 08:29:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:37.362 08:29:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:37.362 08:29:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:32:37.362 08:29:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:32:37.362 08:29:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:37.362 08:29:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:37.362 08:29:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:37.362 08:29:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:37.362 08:29:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:37.362 08:29:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:37.362 08:29:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:37.362 08:29:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.362 08:29:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.362 08:29:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.362 08:29:10 -- 
paths/export.sh@5 -- # export PATH 00:32:37.362 08:29:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.362 08:29:10 -- nvmf/common.sh@46 -- # : 0 00:32:37.362 08:29:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:37.362 08:29:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:37.362 08:29:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:37.362 08:29:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:37.362 08:29:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:37.362 08:29:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:37.362 08:29:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:37.362 08:29:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:37.362 08:29:10 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:37.362 08:29:10 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:32:37.362 08:29:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:37.362 08:29:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:37.362 08:29:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:37.362 08:29:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:37.362 08:29:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:37.362 08:29:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.362 08:29:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:37.362 08:29:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:37.362 08:29:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:32:37.362 08:29:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:32:37.362 08:29:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:32:37.362 08:29:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:32:37.362 08:29:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:32:37.362 08:29:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:32:37.362 08:29:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:37.362 08:29:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:37.362 08:29:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:37.362 08:29:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:32:37.362 08:29:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:37.362 08:29:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:37.362 08:29:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:37.362 08:29:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:37.362 08:29:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:37.362 08:29:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:37.362 08:29:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:37.362 08:29:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:37.362 08:29:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:32:37.362 08:29:10 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:32:37.362 Cannot find device "nvmf_tgt_br" 00:32:37.362 08:29:10 -- nvmf/common.sh@154 -- # true 00:32:37.362 08:29:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:32:37.362 Cannot find device "nvmf_tgt_br2" 00:32:37.362 08:29:10 -- nvmf/common.sh@155 -- # true 00:32:37.362 08:29:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:32:37.362 08:29:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:32:37.362 Cannot find device "nvmf_tgt_br" 00:32:37.362 08:29:10 -- nvmf/common.sh@157 -- # true 00:32:37.362 08:29:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:32:37.362 Cannot find device "nvmf_tgt_br2" 00:32:37.362 08:29:10 -- nvmf/common.sh@158 -- # true 00:32:37.362 08:29:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:32:37.362 08:29:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:32:37.362 08:29:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:37.362 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:37.362 08:29:10 -- nvmf/common.sh@161 -- # true 00:32:37.362 08:29:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:37.362 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:37.362 08:29:10 -- nvmf/common.sh@162 -- # true 00:32:37.362 08:29:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:32:37.362 08:29:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:37.362 08:29:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:37.621 08:29:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:37.621 08:29:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:37.621 08:29:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:37.621 08:29:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:37.621 08:29:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:37.621 08:29:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:37.621 08:29:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:32:37.621 08:29:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:32:37.621 08:29:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:32:37.621 08:29:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:32:37.621 08:29:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:37.621 08:29:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:37.621 08:29:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:37.621 08:29:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:32:37.621 08:29:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:32:37.621 08:29:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:32:37.621 08:29:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:37.621 08:29:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:37.621 08:29:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:37.621 08:29:10 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:37.621 08:29:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:32:37.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:37.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:32:37.621 00:32:37.621 --- 10.0.0.2 ping statistics --- 00:32:37.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.621 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:32:37.622 08:29:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:32:37.622 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:37.622 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:32:37.622 00:32:37.622 --- 10.0.0.3 ping statistics --- 00:32:37.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.622 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:32:37.622 08:29:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:37.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:37.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:32:37.622 00:32:37.622 --- 10.0.0.1 ping statistics --- 00:32:37.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.622 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:32:37.622 08:29:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:37.622 08:29:10 -- nvmf/common.sh@421 -- # return 0 00:32:37.622 08:29:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:37.622 08:29:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:37.622 08:29:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:37.622 08:29:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:37.622 08:29:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:37.622 08:29:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:37.622 08:29:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:37.622 08:29:10 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:32:37.622 08:29:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:37.622 08:29:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:37.622 08:29:10 -- common/autotest_common.sh@10 -- # set +x 00:32:37.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:37.622 08:29:10 -- nvmf/common.sh@469 -- # nvmfpid=67180 00:32:37.622 08:29:10 -- nvmf/common.sh@470 -- # waitforlisten 67180 00:32:37.622 08:29:10 -- common/autotest_common.sh@819 -- # '[' -z 67180 ']' 00:32:37.622 08:29:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:37.622 08:29:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:37.622 08:29:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:37.622 08:29:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:37.622 08:29:10 -- common/autotest_common.sh@10 -- # set +x 00:32:37.622 08:29:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:37.880 [2024-04-17 08:29:10.960004] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
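The nvmfappstart step traced above amounts to launching nvmf_tgt inside the namespace and blocking until its RPC socket answers. A minimal sketch follows; the real waitforlisten helper in autotest_common.sh does the same with a bounded retry count, and polling rpc_get_methods is one way to test the socket, not necessarily the helper's exact mechanism:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!                     # 67180 in this run
    # poll until the target answers on /var/tmp/spdk.sock
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

Here -i sets the shared-memory id, -e 0xFFFF enables all tracepoint groups, and -m 0xE pins reactors to cores 1-3, which matches the three "Reactor started" notices that follow.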
00:32:37.880 [2024-04-17 08:29:10.960107] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:37.880 [2024-04-17 08:29:11.104725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:38.139 [2024-04-17 08:29:11.214291] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:38.139 [2024-04-17 08:29:11.214482] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:38.139 [2024-04-17 08:29:11.214497] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:38.139 [2024-04-17 08:29:11.214504] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:38.139 [2024-04-17 08:29:11.214668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:38.139 [2024-04-17 08:29:11.214923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:38.139 [2024-04-17 08:29:11.214923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:38.708 08:29:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:38.708 08:29:11 -- common/autotest_common.sh@852 -- # return 0 00:32:38.708 08:29:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:38.708 08:29:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:38.708 08:29:11 -- common/autotest_common.sh@10 -- # set +x 00:32:38.708 08:29:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:38.708 08:29:11 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:32:38.708 08:29:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:38.990 [2024-04-17 08:29:12.109387] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:38.990 08:29:12 -- target/ns_hotplug_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:39.255 08:29:12 -- target/ns_hotplug_stress.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:39.255 [2024-04-17 08:29:12.570128] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:39.514 08:29:12 -- target/ns_hotplug_stress.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:39.514 08:29:12 -- target/ns_hotplug_stress.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:32:39.773 Malloc0 00:32:39.773 08:29:12 -- target/ns_hotplug_stress.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:40.032 Delay0 00:32:40.032 08:29:13 -- target/ns_hotplug_stress.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:40.291 08:29:13 -- target/ns_hotplug_stress.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:32:40.551 NULL1 00:32:40.551 08:29:13 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 NULL1 00:32:40.551 08:29:13 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=67308 00:32:40.551 08:29:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:32:40.551 08:29:13 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:40.551 08:29:13 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:32:41.926 Read completed with error (sct=0, sc=11) 00:32:41.927 08:29:15 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:41.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:41.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:41.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:41.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:42.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:42.185 08:29:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:32:42.185 08:29:15 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:32:42.443 true 00:32:42.443 08:29:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:32:42.443 08:29:15 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:43.012 08:29:16 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:43.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:43.271 08:29:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:32:43.271 08:29:16 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:32:43.532 true 00:32:43.532 08:29:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:32:43.532 08:29:16 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:43.797 08:29:16 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:44.065 08:29:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:32:44.065 08:29:17 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:32:44.324 true 00:32:44.324 08:29:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:32:44.324 08:29:17 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:45.261 08:29:18 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:45.519 08:29:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:32:45.519 08:29:18 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:32:45.519 true 00:32:45.778 08:29:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:32:45.778 08:29:18 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:45.778 08:29:19 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:46.035 08:29:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:32:46.035 08:29:19 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:32:46.293 true 00:32:46.552 08:29:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:32:46.552 08:29:19 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:46.552 08:29:19 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:46.811 08:29:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:32:46.811 08:29:20 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:32:47.089 true 00:32:47.089 08:29:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:32:47.089 08:29:20 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:48.471 08:29:21 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:48.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:48.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:48.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:48.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:48.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:48.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:48.471 08:29:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:32:48.471 08:29:21 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:32:48.471 true 00:32:48.729 08:29:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:32:48.729 08:29:21 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:49.294 08:29:22 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:49.553 08:29:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:32:49.553 08:29:22 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:32:49.812 true 00:32:49.812 08:29:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:32:49.812 08:29:23 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:50.070 08:29:23 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:50.339 08:29:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:32:50.339 08:29:23 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:32:50.597 true 00:32:50.597 08:29:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:32:50.597 08:29:23 -- 
target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:51.534 08:29:24 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:51.534 08:29:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:32:51.534 08:29:24 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:32:51.793 true 00:32:51.793 08:29:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:32:51.793 08:29:25 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:52.053 08:29:25 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:52.312 08:29:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:32:52.312 08:29:25 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:32:52.312 true 00:32:52.312 08:29:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:32:52.312 08:29:25 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:53.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:53.688 08:29:26 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:53.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:53.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:53.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:53.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:53.688 08:29:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:32:53.688 08:29:26 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:32:53.947 true 00:32:53.947 08:29:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:32:53.947 08:29:27 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:54.883 08:29:27 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:54.883 08:29:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:32:54.883 08:29:28 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:32:55.140 true 00:32:55.140 08:29:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:32:55.140 08:29:28 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:55.408 08:29:28 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:55.408 08:29:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:32:55.408 08:29:28 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:32:55.668 true 00:32:55.668 08:29:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:32:55.668 08:29:28 -- 
target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:56.603 08:29:29 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:56.860 08:29:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:32:56.860 08:29:30 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:32:57.119 true 00:32:57.119 08:29:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:32:57.119 08:29:30 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:57.378 08:29:30 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:57.638 08:29:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:32:57.638 08:29:30 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:32:57.896 true 00:32:57.896 08:29:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:32:57.896 08:29:30 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:58.833 08:29:31 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:58.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:58.833 08:29:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:32:58.833 08:29:32 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:32:59.092 true 00:32:59.092 08:29:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:32:59.092 08:29:32 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:59.351 08:29:32 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:59.609 08:29:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:32:59.609 08:29:32 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:32:59.609 true 00:32:59.871 08:29:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:32:59.871 08:29:32 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:00.824 08:29:33 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:00.824 08:29:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:33:00.824 08:29:34 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:33:01.091 true 00:33:01.091 08:29:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:33:01.091 08:29:34 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:01.359 08:29:34 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:01.624 08:29:34 -- target/ns_hotplug_stress.sh@40 -- # 
null_size=1020 00:33:01.624 08:29:34 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:33:01.884 true 00:33:01.884 08:29:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:33:01.884 08:29:35 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:02.822 08:29:35 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:03.079 08:29:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:33:03.079 08:29:36 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:33:03.079 true 00:33:03.079 08:29:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:33:03.079 08:29:36 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:03.337 08:29:36 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:03.596 08:29:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:33:03.596 08:29:36 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:33:03.853 true 00:33:03.853 08:29:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:33:03.854 08:29:37 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:04.788 08:29:37 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:05.047 08:29:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:33:05.047 08:29:38 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:33:05.306 true 00:33:05.306 08:29:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:33:05.306 08:29:38 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:05.306 08:29:38 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:05.564 08:29:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:33:05.564 08:29:38 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:33:05.822 true 00:33:05.822 08:29:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:33:05.822 08:29:39 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:06.760 08:29:39 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:07.020 08:29:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:33:07.020 08:29:40 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:33:07.020 true 00:33:07.280 08:29:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:33:07.280 08:29:40 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:07.280 08:29:40 -- 
target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:07.540 08:29:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:33:07.540 08:29:40 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:33:07.799 true 00:33:07.799 08:29:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:33:07.799 08:29:41 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:08.737 08:29:41 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:09.008 08:29:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:33:09.008 08:29:42 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:33:09.281 true 00:33:09.281 08:29:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:33:09.281 08:29:42 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:09.541 08:29:42 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:09.541 08:29:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:33:09.541 08:29:42 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:33:09.800 true 00:33:09.800 08:29:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:33:09.800 08:29:43 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:10.739 08:29:43 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:10.739 Initializing NVMe Controllers 00:33:10.740 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:10.740 Controller IO queue size 128, less than required. 00:33:10.740 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:10.740 Controller IO queue size 128, less than required. 00:33:10.740 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:10.740 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:10.740 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:10.740 Initialization complete. Launching workers. 
00:33:10.740 ========================================================
00:33:10.740 Latency(us)
00:33:10.740 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:33:10.740 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     692.02       0.34  108678.34    3385.92 1170689.74
00:33:10.740 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   13470.14       6.58    9502.52    1625.41  533604.97
00:33:10.740 ========================================================
00:33:10.740 Total                                                                  :   14162.16       6.92   14348.65    1625.41 1170689.74
00:33:10.740
00:33:11.000 08:29:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:33:11.000 08:29:44 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:33:11.258 true 00:33:11.258 08:29:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 67308 00:33:11.258 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (67308) - No such process 00:33:11.258 08:29:44 -- target/ns_hotplug_stress.sh@44 -- # wait 67308 00:33:11.258 08:29:44 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:33:11.258 08:29:44 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:33:11.258 08:29:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:11.258 08:29:44 -- nvmf/common.sh@116 -- # sync 00:33:11.258 08:29:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:11.258 08:29:44 -- nvmf/common.sh@119 -- # set +e 00:33:11.258 08:29:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:11.258 08:29:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:11.258 rmmod nvme_tcp 00:33:11.258 rmmod nvme_fabrics 00:33:11.258 rmmod nvme_keyring 00:33:11.258 08:29:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:11.258 08:29:44 -- nvmf/common.sh@123 -- # set -e 00:33:11.258 08:29:44 -- nvmf/common.sh@124 -- # return 0 00:33:11.258 08:29:44 -- nvmf/common.sh@477 -- # '[' -n 67180 ']' 00:33:11.258 08:29:44 -- nvmf/common.sh@478 -- # killprocess 67180 00:33:11.258 08:29:44 -- common/autotest_common.sh@926 -- # '[' -z 67180 ']' 00:33:11.258 08:29:44 -- common/autotest_common.sh@930 -- # kill -0 67180 00:33:11.258 08:29:44 -- common/autotest_common.sh@931 -- # uname 00:33:11.258 08:29:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:11.258 08:29:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67180 00:33:11.258 killing process with pid 67180 00:33:11.258 08:29:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:11.258 08:29:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:11.258 08:29:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67180' 00:33:11.258 08:29:44 -- common/autotest_common.sh@945 -- # kill 67180 00:33:11.258 08:29:44 -- common/autotest_common.sh@950 -- # wait 67180 00:33:11.517 08:29:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:11.517 08:29:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:11.517 08:29:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:11.517 08:29:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:11.517 08:29:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:11.517 08:29:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.517 08:29:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:11.517 08:29:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.517 08:29:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
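The repetitive add_ns/resize/remove_ns passes that fill the middle of this test are one loop in ns_hotplug_stress.sh; schematically, with rpc.py standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py (a sketch keyed to the script lines shown in the trace, not the verbatim loop):

    kill -0 "$PERF_PID" || break                                    # stop once perf (67308) exits
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove under active I/O
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # re-attach the delay bdev
    null_size=$((null_size + 1))                                    # 1000, 1001, ... one per pass
    rpc.py bdev_null_resize NULL1 "$null_size"                      # resize NULL1 while exported

The suppressed "Read completed with error (sct=0, sc=11)" messages are the intended outcome: reads in flight against a just-removed namespace complete with status 0x0B (Invalid Namespace or Format). The latency table above reflects the same split: NSID 1 (the hot-plugged Delay0 namespace) completes far fewer reads at much higher average latency than NSID 2 (the resized NULL1 namespace).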
00:33:11.517 00:33:11.517 real 0m34.458s 00:33:11.517 user 2m25.644s 00:33:11.517 sys 0m6.465s 00:33:11.517 08:29:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:11.517 08:29:44 -- common/autotest_common.sh@10 -- # set +x 00:33:11.517 ************************************ 00:33:11.517 END TEST nvmf_ns_hotplug_stress 00:33:11.517 ************************************ 00:33:11.517 08:29:44 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:33:11.517 08:29:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:11.517 08:29:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:11.517 08:29:44 -- common/autotest_common.sh@10 -- # set +x 00:33:11.517 ************************************ 00:33:11.517 START TEST nvmf_connect_stress 00:33:11.517 ************************************ 00:33:11.517 08:29:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:33:11.777 * Looking for test storage... 00:33:11.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:11.777 08:29:44 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:11.777 08:29:44 -- nvmf/common.sh@7 -- # uname -s 00:33:11.777 08:29:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:11.778 08:29:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:11.778 08:29:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:11.778 08:29:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:11.778 08:29:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:11.778 08:29:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:11.778 08:29:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:11.778 08:29:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:11.778 08:29:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:11.778 08:29:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:11.778 08:29:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:33:11.778 08:29:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:33:11.778 08:29:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:11.778 08:29:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:11.778 08:29:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:11.778 08:29:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:11.778 08:29:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:11.778 08:29:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:11.778 08:29:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:11.778 08:29:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.778 08:29:44 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.778 08:29:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.778 08:29:44 -- paths/export.sh@5 -- # export PATH 00:33:11.778 08:29:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.778 08:29:44 -- nvmf/common.sh@46 -- # : 0 00:33:11.778 08:29:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:11.778 08:29:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:11.778 08:29:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:11.778 08:29:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:11.778 08:29:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:11.778 08:29:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:11.778 08:29:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:11.778 08:29:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:11.778 08:29:44 -- target/connect_stress.sh@12 -- # nvmftestinit 00:33:11.778 08:29:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:11.778 08:29:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:11.778 08:29:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:11.778 08:29:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:11.778 08:29:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:11.778 08:29:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.778 08:29:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:11.778 08:29:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.778 08:29:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:33:11.778 08:29:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:33:11.778 08:29:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:33:11.778 08:29:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:33:11.778 08:29:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:33:11.778 08:29:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:33:11.778 08:29:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:11.778 
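Each time paths/export.sh is sourced it prepends the same go/protoc/golangci directories again, which is why the exported PATH above carries several copies of every entry. The duplicates are harmless, but an idempotent prepend avoids the growth; a generic sketch (not what export.sh itself does):

  path_prepend() {
      # Add $1 to the front of PATH only if it is not already present.
      case ":$PATH:" in
          *":$1:"*) ;;
          *) PATH="$1:$PATH" ;;
      esac
  }
  path_prepend /opt/golangci/1.54.2/bin
  path_prepend /opt/protoc/21.7/bin
  path_prepend /opt/go/1.21.1/bin
  export PATH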
08:29:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:11.778 08:29:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:11.778 08:29:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:33:11.778 08:29:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:11.778 08:29:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:11.778 08:29:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:11.778 08:29:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:11.778 08:29:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:11.778 08:29:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:11.778 08:29:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:11.778 08:29:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:11.778 08:29:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:33:11.778 08:29:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:33:11.778 Cannot find device "nvmf_tgt_br" 00:33:11.778 08:29:45 -- nvmf/common.sh@154 -- # true 00:33:11.778 08:29:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:33:11.778 Cannot find device "nvmf_tgt_br2" 00:33:11.778 08:29:45 -- nvmf/common.sh@155 -- # true 00:33:11.778 08:29:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:33:11.778 08:29:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:33:11.778 Cannot find device "nvmf_tgt_br" 00:33:11.778 08:29:45 -- nvmf/common.sh@157 -- # true 00:33:11.778 08:29:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:33:11.778 Cannot find device "nvmf_tgt_br2" 00:33:11.778 08:29:45 -- nvmf/common.sh@158 -- # true 00:33:11.778 08:29:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:33:12.038 08:29:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:33:12.038 08:29:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:12.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:12.038 08:29:45 -- nvmf/common.sh@161 -- # true 00:33:12.038 08:29:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:12.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:12.038 08:29:45 -- nvmf/common.sh@162 -- # true 00:33:12.038 08:29:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:33:12.038 08:29:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:12.038 08:29:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:12.038 08:29:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:12.038 08:29:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:12.038 08:29:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:12.038 08:29:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:12.038 08:29:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:12.038 08:29:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:12.038 08:29:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:33:12.038 08:29:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:33:12.038 
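The nvmf_veth_init sequence traced here rebuilds the test topology from scratch: one network namespace for the target, veth pairs whose bridge-side peers stay in the root namespace, and the 10.0.0.0/24 addressing (the remaining link-up steps continue below). Condensed from the commands in the trace, a sketch with error handling omitted:

  ip netns add nvmf_tgt_ns_spdk
  # Three veth pairs: the initiator side plus two target-facing interfaces.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # Move the target ends into the namespace.
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Initiator is 10.0.0.1; the target answers on 10.0.0.2 and 10.0.0.3.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2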
08:29:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:33:12.038 08:29:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:33:12.038 08:29:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:12.038 08:29:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:12.038 08:29:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:12.038 08:29:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:33:12.038 08:29:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:33:12.038 08:29:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:33:12.038 08:29:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:12.038 08:29:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:12.038 08:29:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:12.038 08:29:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:12.038 08:29:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:33:12.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:12.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:33:12.038 00:33:12.038 --- 10.0.0.2 ping statistics --- 00:33:12.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.038 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:33:12.038 08:29:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:33:12.038 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:12.038 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:33:12.038 00:33:12.038 --- 10.0.0.3 ping statistics --- 00:33:12.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.038 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:33:12.038 08:29:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:12.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:12.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:33:12.038 00:33:12.038 --- 10.0.0.1 ping statistics --- 00:33:12.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.038 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:33:12.038 08:29:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:12.038 08:29:45 -- nvmf/common.sh@421 -- # return 0 00:33:12.038 08:29:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:12.038 08:29:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:12.038 08:29:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:12.038 08:29:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:12.038 08:29:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:12.038 08:29:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:12.038 08:29:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:12.297 08:29:45 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:33:12.297 08:29:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:12.297 08:29:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:12.297 08:29:45 -- common/autotest_common.sh@10 -- # set +x 00:33:12.297 08:29:45 -- nvmf/common.sh@469 -- # nvmfpid=68425 00:33:12.297 08:29:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:12.297 08:29:45 -- nvmf/common.sh@470 -- # waitforlisten 68425 00:33:12.297 08:29:45 -- common/autotest_common.sh@819 -- # '[' -z 68425 ']' 00:33:12.297 08:29:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.297 08:29:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:12.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:12.297 08:29:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:12.297 08:29:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:12.297 08:29:45 -- common/autotest_common.sh@10 -- # set +x 00:33:12.297 [2024-04-17 08:29:45.441371] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:33:12.297 [2024-04-17 08:29:45.441456] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.297 [2024-04-17 08:29:45.569472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:12.557 [2024-04-17 08:29:45.670723] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:12.557 [2024-04-17 08:29:45.670888] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:12.557 [2024-04-17 08:29:45.670898] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:12.557 [2024-04-17 08:29:45.670904] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
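Behind the three pings above: once every interface is up, the bridge-side veth ends are enslaved to nvmf_br, the firewall is opened for the NVMe/TCP listener port, and one ping per address proves the path in both directions before the target application is configured. The same wiring, condensed from the trace:

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # Accept NVMe/TCP traffic on 4420 and let bridged frames through.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # End-to-end reachability check from both namespaces.
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1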
00:33:12.557 [2024-04-17 08:29:45.671148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:12.557 [2024-04-17 08:29:45.671230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.557 [2024-04-17 08:29:45.671230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:13.126 08:29:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:13.126 08:29:46 -- common/autotest_common.sh@852 -- # return 0 00:33:13.126 08:29:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:13.126 08:29:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:13.126 08:29:46 -- common/autotest_common.sh@10 -- # set +x 00:33:13.126 08:29:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:13.126 08:29:46 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:13.126 08:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:13.126 08:29:46 -- common/autotest_common.sh@10 -- # set +x 00:33:13.126 [2024-04-17 08:29:46.380820] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:13.126 08:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:13.126 08:29:46 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:13.126 08:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:13.126 08:29:46 -- common/autotest_common.sh@10 -- # set +x 00:33:13.126 08:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:13.126 08:29:46 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:13.126 08:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:13.126 08:29:46 -- common/autotest_common.sh@10 -- # set +x 00:33:13.126 [2024-04-17 08:29:46.405972] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:13.126 08:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:13.126 08:29:46 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:33:13.126 08:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:13.126 08:29:46 -- common/autotest_common.sh@10 -- # set +x 00:33:13.126 NULL1 00:33:13.126 08:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:13.126 08:29:46 -- target/connect_stress.sh@21 -- # PERF_PID=68477 00:33:13.126 08:29:46 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:33:13.126 08:29:46 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:33:13.126 08:29:46 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:33:13.126 08:29:46 -- target/connect_stress.sh@27 -- # seq 1 20 00:33:13.126 08:29:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:33:13.126 08:29:46 -- target/connect_stress.sh@28 -- # cat 00:33:13.126 08:29:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:33:13.126 08:29:46 -- target/connect_stress.sh@28 -- # cat 00:33:13.126 08:29:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:33:13.126 08:29:46 -- target/connect_stress.sh@28 -- # cat 00:33:13.126 08:29:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:33:13.126 08:29:46 -- 
target/connect_stress.sh@28 -- # cat 00:33:13.126 08:29:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:33:13.126 08:29:46 -- target/connect_stress.sh@28 -- # cat 00:33:13.126 08:29:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:33:13.126 08:29:46 -- target/connect_stress.sh@28 -- # cat 00:33:13.126 08:29:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:33:13.126 08:29:46 -- target/connect_stress.sh@28 -- # cat 00:33:13.126 08:29:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:33:13.126 08:29:46 -- target/connect_stress.sh@28 -- # cat 00:33:13.386 08:29:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:33:13.386 08:29:46 -- target/connect_stress.sh@28 -- # cat 00:33:13.386 08:29:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:33:13.386 08:29:46 -- target/connect_stress.sh@28 -- # cat 00:33:13.386 08:29:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:33:13.386 08:29:46 -- target/connect_stress.sh@28 -- # cat 00:33:13.386 08:29:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:33:13.386 08:29:46 -- target/connect_stress.sh@28 -- # cat 00:33:13.386 08:29:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:33:13.386 08:29:46 -- target/connect_stress.sh@28 -- # cat 00:33:13.386 08:29:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:33:13.386 08:29:46 -- target/connect_stress.sh@28 -- # cat 00:33:13.386 08:29:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:33:13.386 08:29:46 -- target/connect_stress.sh@28 -- # cat 00:33:13.386 08:29:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:33:13.386 08:29:46 -- target/connect_stress.sh@28 -- # cat 00:33:13.386 08:29:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:33:13.386 08:29:46 -- target/connect_stress.sh@28 -- # cat 00:33:13.386 08:29:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:33:13.386 08:29:46 -- target/connect_stress.sh@28 -- # cat 00:33:13.386 08:29:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:33:13.386 08:29:46 -- target/connect_stress.sh@28 -- # cat 00:33:13.386 08:29:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:33:13.386 08:29:46 -- target/connect_stress.sh@28 -- # cat 00:33:13.386 08:29:46 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:13.386 08:29:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:13.386 08:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:13.386 08:29:46 -- common/autotest_common.sh@10 -- # set +x 00:33:13.646 08:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:13.646 08:29:46 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:13.646 08:29:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:13.646 08:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:13.646 08:29:46 -- common/autotest_common.sh@10 -- # set +x 00:33:13.904 08:29:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:13.904 08:29:47 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:13.904 08:29:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:13.904 08:29:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:13.904 08:29:47 -- common/autotest_common.sh@10 -- # set +x 00:33:14.163 08:29:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:14.163 08:29:47 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:14.163 08:29:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:14.163 08:29:47 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:33:14.163 08:29:47 -- common/autotest_common.sh@10 -- # set +x 00:33:14.730 08:29:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:14.730 08:29:47 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:14.730 08:29:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:14.730 08:29:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:14.730 08:29:47 -- common/autotest_common.sh@10 -- # set +x 00:33:14.989 08:29:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:14.989 08:29:48 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:14.989 08:29:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:14.989 08:29:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:14.989 08:29:48 -- common/autotest_common.sh@10 -- # set +x 00:33:15.249 08:29:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.249 08:29:48 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:15.249 08:29:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:15.249 08:29:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.249 08:29:48 -- common/autotest_common.sh@10 -- # set +x 00:33:15.508 08:29:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.508 08:29:48 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:15.508 08:29:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:15.508 08:29:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.508 08:29:48 -- common/autotest_common.sh@10 -- # set +x 00:33:16.075 08:29:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:16.075 08:29:49 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:16.075 08:29:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:16.075 08:29:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:16.075 08:29:49 -- common/autotest_common.sh@10 -- # set +x 00:33:16.334 08:29:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:16.334 08:29:49 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:16.334 08:29:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:16.334 08:29:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:16.334 08:29:49 -- common/autotest_common.sh@10 -- # set +x 00:33:16.592 08:29:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:16.592 08:29:49 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:16.592 08:29:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:16.592 08:29:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:16.592 08:29:49 -- common/autotest_common.sh@10 -- # set +x 00:33:16.852 08:29:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:16.852 08:29:50 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:16.852 08:29:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:16.852 08:29:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:16.852 08:29:50 -- common/autotest_common.sh@10 -- # set +x 00:33:17.110 08:29:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:17.110 08:29:50 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:17.110 08:29:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:17.110 08:29:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:17.110 08:29:50 -- common/autotest_common.sh@10 -- # set +x 00:33:17.688 08:29:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:17.688 08:29:50 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:17.688 08:29:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:17.688 08:29:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:17.688 
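The kill -0 68477 / rpc_cmd pairs that repeat above and continue below are the supervising half of connect_stress: while the stress binary is alive, the script keeps replaying the batch of RPC calls assembled by the twenty cat steps earlier. The shape of that loop, reconstructed from the trace; this is a sketch, not the script's literal code, and the redirect from $rpcs is an assumption:

  # $rpcs points at rpc.txt, built by the seq 1 20 / cat loop above.
  while kill -0 "$PERF_PID" 2>/dev/null; do
      rpc_cmd < "$rpcs"   # assumed: replay the batched RPC calls each pass
  done
  wait "$PERF_PID"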
08:29:50 -- common/autotest_common.sh@10 -- # set +x 00:33:17.947 08:29:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:17.947 08:29:51 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:17.947 08:29:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:17.947 08:29:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:17.947 08:29:51 -- common/autotest_common.sh@10 -- # set +x 00:33:18.205 08:29:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.205 08:29:51 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:18.205 08:29:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:18.205 08:29:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.205 08:29:51 -- common/autotest_common.sh@10 -- # set +x 00:33:18.463 08:29:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.463 08:29:51 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:18.463 08:29:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:18.463 08:29:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.463 08:29:51 -- common/autotest_common.sh@10 -- # set +x 00:33:18.722 08:29:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.722 08:29:52 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:18.722 08:29:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:18.722 08:29:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.722 08:29:52 -- common/autotest_common.sh@10 -- # set +x 00:33:19.289 08:29:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.289 08:29:52 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:19.290 08:29:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:19.290 08:29:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.290 08:29:52 -- common/autotest_common.sh@10 -- # set +x 00:33:19.548 08:29:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.548 08:29:52 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:19.548 08:29:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:19.548 08:29:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.548 08:29:52 -- common/autotest_common.sh@10 -- # set +x 00:33:19.808 08:29:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.808 08:29:53 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:19.808 08:29:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:19.808 08:29:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.808 08:29:53 -- common/autotest_common.sh@10 -- # set +x 00:33:20.066 08:29:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:20.066 08:29:53 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:20.066 08:29:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:20.066 08:29:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:20.066 08:29:53 -- common/autotest_common.sh@10 -- # set +x 00:33:20.634 08:29:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:20.634 08:29:53 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:20.634 08:29:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:20.634 08:29:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:20.634 08:29:53 -- common/autotest_common.sh@10 -- # set +x 00:33:20.894 08:29:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:20.894 08:29:53 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:20.894 08:29:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:20.894 08:29:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:20.894 08:29:53 -- 
common/autotest_common.sh@10 -- # set +x 00:33:21.152 08:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:21.152 08:29:54 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:21.152 08:29:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:21.152 08:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:21.152 08:29:54 -- common/autotest_common.sh@10 -- # set +x 00:33:21.411 08:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:21.411 08:29:54 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:21.411 08:29:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:21.411 08:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:21.411 08:29:54 -- common/autotest_common.sh@10 -- # set +x 00:33:21.670 08:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:21.670 08:29:54 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:21.670 08:29:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:21.670 08:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:21.670 08:29:54 -- common/autotest_common.sh@10 -- # set +x 00:33:22.257 08:29:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.257 08:29:55 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:22.257 08:29:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:22.257 08:29:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.257 08:29:55 -- common/autotest_common.sh@10 -- # set +x 00:33:22.516 08:29:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.516 08:29:55 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:22.516 08:29:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:22.516 08:29:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.516 08:29:55 -- common/autotest_common.sh@10 -- # set +x 00:33:22.777 08:29:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.777 08:29:55 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:22.777 08:29:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:22.777 08:29:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.777 08:29:55 -- common/autotest_common.sh@10 -- # set +x 00:33:23.037 08:29:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.037 08:29:56 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:23.037 08:29:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:23.037 08:29:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.037 08:29:56 -- common/autotest_common.sh@10 -- # set +x 00:33:23.296 08:29:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.296 08:29:56 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:23.296 08:29:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:33:23.296 08:29:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.296 08:29:56 -- common/autotest_common.sh@10 -- # set +x 00:33:23.554 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:23.814 08:29:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.814 08:29:56 -- target/connect_stress.sh@34 -- # kill -0 68477 00:33:23.814 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (68477) - No such process 00:33:23.814 08:29:56 -- target/connect_stress.sh@38 -- # wait 68477 00:33:23.814 08:29:56 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:33:23.814 08:29:56 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:23.814 08:29:56 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:33:23.814 08:29:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:23.814 08:29:56 -- nvmf/common.sh@116 -- # sync 00:33:23.814 08:29:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:23.814 08:29:56 -- nvmf/common.sh@119 -- # set +e 00:33:23.814 08:29:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:23.814 08:29:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:23.814 rmmod nvme_tcp 00:33:23.814 rmmod nvme_fabrics 00:33:23.814 rmmod nvme_keyring 00:33:23.814 08:29:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:23.814 08:29:57 -- nvmf/common.sh@123 -- # set -e 00:33:23.814 08:29:57 -- nvmf/common.sh@124 -- # return 0 00:33:23.814 08:29:57 -- nvmf/common.sh@477 -- # '[' -n 68425 ']' 00:33:23.814 08:29:57 -- nvmf/common.sh@478 -- # killprocess 68425 00:33:23.814 08:29:57 -- common/autotest_common.sh@926 -- # '[' -z 68425 ']' 00:33:23.814 08:29:57 -- common/autotest_common.sh@930 -- # kill -0 68425 00:33:23.814 08:29:57 -- common/autotest_common.sh@931 -- # uname 00:33:23.814 08:29:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:23.814 08:29:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68425 00:33:23.814 08:29:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:23.814 08:29:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:23.814 08:29:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68425' 00:33:23.814 killing process with pid 68425 00:33:23.814 08:29:57 -- common/autotest_common.sh@945 -- # kill 68425 00:33:23.814 08:29:57 -- common/autotest_common.sh@950 -- # wait 68425 00:33:24.383 08:29:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:24.383 08:29:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:24.383 08:29:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:24.383 08:29:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:24.383 08:29:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:24.383 08:29:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.383 08:29:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:24.383 08:29:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.383 08:29:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:33:24.383 00:33:24.383 real 0m12.615s 00:33:24.383 user 0m42.461s 00:33:24.383 sys 0m2.625s 00:33:24.383 08:29:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:24.383 08:29:57 -- common/autotest_common.sh@10 -- # set +x 00:33:24.383 ************************************ 00:33:24.383 END TEST nvmf_connect_stress 00:33:24.383 ************************************ 00:33:24.383 08:29:57 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:33:24.383 08:29:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:24.383 08:29:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:24.383 08:29:57 -- common/autotest_common.sh@10 -- # set +x 00:33:24.383 ************************************ 00:33:24.383 START TEST nvmf_fused_ordering 00:33:24.383 ************************************ 00:33:24.383 08:29:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:33:24.383 * Looking for test storage... 
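Every suite in this job runs under the same run_test wrapper: it prints the START banner, executes the test script under time (producing the real/user/sys summary seen above), and closes with the END banner. A minimal sketch of that pattern, assuming a plain time wrapper rather than the harness's exact code:

  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
  run_test_sketch nvmf_fused_ordering \
      /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp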
00:33:24.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:24.383 08:29:57 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:24.383 08:29:57 -- nvmf/common.sh@7 -- # uname -s 00:33:24.383 08:29:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:24.383 08:29:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:24.383 08:29:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:24.383 08:29:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:24.383 08:29:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:24.383 08:29:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:24.383 08:29:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:24.383 08:29:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:24.383 08:29:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:24.383 08:29:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:24.383 08:29:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:33:24.383 08:29:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:33:24.383 08:29:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:24.383 08:29:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:24.383 08:29:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:24.383 08:29:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:24.383 08:29:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:24.383 08:29:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:24.383 08:29:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:24.383 08:29:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.383 08:29:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.383 08:29:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.383 08:29:57 -- 
paths/export.sh@5 -- # export PATH 00:33:24.383 08:29:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.383 08:29:57 -- nvmf/common.sh@46 -- # : 0 00:33:24.383 08:29:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:24.383 08:29:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:24.383 08:29:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:24.383 08:29:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:24.383 08:29:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:24.383 08:29:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:24.383 08:29:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:24.383 08:29:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:24.383 08:29:57 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:33:24.383 08:29:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:24.383 08:29:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:24.383 08:29:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:24.383 08:29:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:24.383 08:29:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:24.383 08:29:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.383 08:29:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:24.383 08:29:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.383 08:29:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:33:24.383 08:29:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:33:24.383 08:29:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:33:24.383 08:29:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:33:24.384 08:29:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:33:24.384 08:29:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:33:24.384 08:29:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:24.384 08:29:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:24.384 08:29:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:24.384 08:29:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:33:24.384 08:29:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:24.384 08:29:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:24.384 08:29:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:24.384 08:29:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:24.384 08:29:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:24.384 08:29:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:24.384 08:29:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:24.384 08:29:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:24.384 08:29:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:33:24.384 08:29:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:33:24.642 Cannot find device "nvmf_tgt_br" 00:33:24.642 
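The "Cannot find device" lines here and below are expected: nvmftestinit begins by tearing down whatever topology a previous run may have left, and the trace shows a true executed right after each failing command, so the errors do not abort the script under set -e. The same best-effort cleanup in isolation, a sketch:

  # Each step may fail on a clean host; '|| true' mirrors the recovery in the trace.
  ip link set nvmf_init_br nomaster || true
  ip link set nvmf_tgt_br nomaster || true
  ip link set nvmf_tgt_br2 nomaster || true
  ip link set nvmf_init_br down || true
  ip link set nvmf_tgt_br down || true
  ip link set nvmf_tgt_br2 down || true
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true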
08:29:57 -- nvmf/common.sh@154 -- # true 00:33:24.642 08:29:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:33:24.642 Cannot find device "nvmf_tgt_br2" 00:33:24.642 08:29:57 -- nvmf/common.sh@155 -- # true 00:33:24.642 08:29:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:33:24.642 08:29:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:33:24.642 Cannot find device "nvmf_tgt_br" 00:33:24.642 08:29:57 -- nvmf/common.sh@157 -- # true 00:33:24.642 08:29:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:33:24.642 Cannot find device "nvmf_tgt_br2" 00:33:24.642 08:29:57 -- nvmf/common.sh@158 -- # true 00:33:24.642 08:29:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:33:24.642 08:29:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:33:24.642 08:29:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:24.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:24.642 08:29:57 -- nvmf/common.sh@161 -- # true 00:33:24.642 08:29:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:24.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:24.642 08:29:57 -- nvmf/common.sh@162 -- # true 00:33:24.642 08:29:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:33:24.642 08:29:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:24.642 08:29:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:24.642 08:29:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:24.642 08:29:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:24.642 08:29:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:24.642 08:29:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:24.642 08:29:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:24.642 08:29:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:24.642 08:29:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:33:24.642 08:29:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:33:24.642 08:29:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:33:24.642 08:29:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:33:24.642 08:29:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:24.642 08:29:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:24.642 08:29:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:24.642 08:29:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:33:24.642 08:29:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:33:24.642 08:29:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:33:24.901 08:29:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:24.901 08:29:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:24.901 08:29:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:24.902 08:29:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:24.902 08:29:58 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:33:24.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:24.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:33:24.902 00:33:24.902 --- 10.0.0.2 ping statistics --- 00:33:24.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.902 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:33:24.902 08:29:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:33:24.902 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:24.902 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:33:24.902 00:33:24.902 --- 10.0.0.3 ping statistics --- 00:33:24.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.902 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:33:24.902 08:29:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:24.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:24.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:33:24.902 00:33:24.902 --- 10.0.0.1 ping statistics --- 00:33:24.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.902 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:33:24.902 08:29:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:24.902 08:29:58 -- nvmf/common.sh@421 -- # return 0 00:33:24.902 08:29:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:24.902 08:29:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:24.902 08:29:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:24.902 08:29:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:24.902 08:29:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:24.902 08:29:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:24.902 08:29:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:24.902 08:29:58 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:33:24.902 08:29:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:24.902 08:29:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:24.902 08:29:58 -- common/autotest_common.sh@10 -- # set +x 00:33:24.902 08:29:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:24.902 08:29:58 -- nvmf/common.sh@469 -- # nvmfpid=68814 00:33:24.902 08:29:58 -- nvmf/common.sh@470 -- # waitforlisten 68814 00:33:24.902 08:29:58 -- common/autotest_common.sh@819 -- # '[' -z 68814 ']' 00:33:24.902 08:29:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:24.902 08:29:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:24.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:24.902 08:29:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:24.902 08:29:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:24.902 08:29:58 -- common/autotest_common.sh@10 -- # set +x 00:33:24.902 [2024-04-17 08:29:58.107185] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
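For fused_ordering the target is again launched inside the namespace (note the ip netns exec prefix on the nvmf_tgt command above, now with core mask 0x2), and the harness blocks until the RPC socket answers before configuring the subsystem traced below. A condensed sketch of that bring-up using the paths from this run; the polling loop stands in for waitforlisten and is an assumption:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # Wait until the app serves RPCs on the default socket.
  until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.1
  done
  # Same RPC sequence as in the trace that follows.
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" bdev_null_create NULL1 1000 512
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # Finally the fused-ordering exerciser is pointed at the listener.
  /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'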
00:33:24.902 [2024-04-17 08:29:58.107269] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:25.161 [2024-04-17 08:29:58.235457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.161 [2024-04-17 08:29:58.332294] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:25.161 [2024-04-17 08:29:58.332459] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:25.161 [2024-04-17 08:29:58.332468] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:25.161 [2024-04-17 08:29:58.332474] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:25.161 [2024-04-17 08:29:58.332498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:25.728 08:29:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:25.728 08:29:58 -- common/autotest_common.sh@852 -- # return 0 00:33:25.728 08:29:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:25.728 08:29:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:25.728 08:29:58 -- common/autotest_common.sh@10 -- # set +x 00:33:25.728 08:29:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:25.728 08:29:59 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:25.728 08:29:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:25.728 08:29:59 -- common/autotest_common.sh@10 -- # set +x 00:33:25.728 [2024-04-17 08:29:59.039933] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:25.728 08:29:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:25.728 08:29:59 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:25.728 08:29:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:25.728 08:29:59 -- common/autotest_common.sh@10 -- # set +x 00:33:25.728 08:29:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:25.728 08:29:59 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:25.728 08:29:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:25.728 08:29:59 -- common/autotest_common.sh@10 -- # set +x 00:33:25.985 [2024-04-17 08:29:59.063985] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:25.985 08:29:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:25.985 08:29:59 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:33:25.985 08:29:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:25.985 08:29:59 -- common/autotest_common.sh@10 -- # set +x 00:33:25.985 NULL1 00:33:25.985 08:29:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:25.985 08:29:59 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:33:25.985 08:29:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:25.985 08:29:59 -- common/autotest_common.sh@10 -- # set +x 00:33:25.985 08:29:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:25.985 08:29:59 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
NULL1 00:33:25.985 08:29:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:25.985 08:29:59 -- common/autotest_common.sh@10 -- # set +x 00:33:25.985 08:29:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:25.985 08:29:59 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:25.985 [2024-04-17 08:29:59.134706] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:33:25.985 [2024-04-17 08:29:59.134738] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68866 ] 00:33:26.243 Attached to nqn.2016-06.io.spdk:cnode1 00:33:26.243 Namespace ID: 1 size: 1GB 00:33:26.243 fused_ordering(0) 00:33:26.243 fused_ordering(1) 00:33:26.243 fused_ordering(2) 00:33:26.243 fused_ordering(3) 00:33:26.243 fused_ordering(4) 00:33:26.243 fused_ordering(5) 00:33:26.243 fused_ordering(6) 00:33:26.243 fused_ordering(7) 00:33:26.243 fused_ordering(8) 00:33:26.243 fused_ordering(9) 00:33:26.243 fused_ordering(10) 00:33:26.243 fused_ordering(11) 00:33:26.243 fused_ordering(12) 00:33:26.243 fused_ordering(13) 00:33:26.243 fused_ordering(14) 00:33:26.243 fused_ordering(15) 00:33:26.243 fused_ordering(16) 00:33:26.243 fused_ordering(17) 00:33:26.243 fused_ordering(18) 00:33:26.243 fused_ordering(19) 00:33:26.243 fused_ordering(20) 00:33:26.243 fused_ordering(21) 00:33:26.243 fused_ordering(22) 00:33:26.243 fused_ordering(23) 00:33:26.243 fused_ordering(24) 00:33:26.243 fused_ordering(25) 00:33:26.243 fused_ordering(26) 00:33:26.244 fused_ordering(27) 00:33:26.244 fused_ordering(28) 00:33:26.244 fused_ordering(29) 00:33:26.244 fused_ordering(30) 00:33:26.244 fused_ordering(31) 00:33:26.244 fused_ordering(32) 00:33:26.244 fused_ordering(33) 00:33:26.244 fused_ordering(34) 00:33:26.244 fused_ordering(35) 00:33:26.244 fused_ordering(36) 00:33:26.244 fused_ordering(37) 00:33:26.244 fused_ordering(38) 00:33:26.244 fused_ordering(39) 00:33:26.244 fused_ordering(40) 00:33:26.244 fused_ordering(41) 00:33:26.244 fused_ordering(42) 00:33:26.244 fused_ordering(43) 00:33:26.244 fused_ordering(44) 00:33:26.244 fused_ordering(45) 00:33:26.244 fused_ordering(46) 00:33:26.244 fused_ordering(47) 00:33:26.244 fused_ordering(48) 00:33:26.244 fused_ordering(49) 00:33:26.244 fused_ordering(50) 00:33:26.244 fused_ordering(51) 00:33:26.244 fused_ordering(52) 00:33:26.244 fused_ordering(53) 00:33:26.244 fused_ordering(54) 00:33:26.244 fused_ordering(55) 00:33:26.244 fused_ordering(56) 00:33:26.244 fused_ordering(57) 00:33:26.244 fused_ordering(58) 00:33:26.244 fused_ordering(59) 00:33:26.244 fused_ordering(60) 00:33:26.244 fused_ordering(61) 00:33:26.244 fused_ordering(62) 00:33:26.244 fused_ordering(63) 00:33:26.244 fused_ordering(64) 00:33:26.244 fused_ordering(65) 00:33:26.244 fused_ordering(66) 00:33:26.244 fused_ordering(67) 00:33:26.244 fused_ordering(68) 00:33:26.244 fused_ordering(69) 00:33:26.244 fused_ordering(70) 00:33:26.244 fused_ordering(71) 00:33:26.244 fused_ordering(72) 00:33:26.244 fused_ordering(73) 00:33:26.244 fused_ordering(74) 00:33:26.244 fused_ordering(75) 00:33:26.244 fused_ordering(76) 00:33:26.244 fused_ordering(77) 00:33:26.244 fused_ordering(78) 00:33:26.244 fused_ordering(79) 00:33:26.244 fused_ordering(80) 00:33:26.244 
fused_ordering(81) 00:33:26.244 fused_ordering(82) 00:33:26.244 fused_ordering(83) 00:33:26.244 fused_ordering(84) 00:33:26.244 fused_ordering(85) 00:33:26.244 fused_ordering(86) 00:33:26.244 fused_ordering(87) 00:33:26.244 fused_ordering(88) 00:33:26.244 fused_ordering(89) 00:33:26.244 fused_ordering(90) 00:33:26.244 fused_ordering(91) 00:33:26.244 fused_ordering(92) 00:33:26.244 fused_ordering(93) 00:33:26.244 fused_ordering(94) 00:33:26.244 fused_ordering(95) 00:33:26.244 fused_ordering(96) 00:33:26.244 fused_ordering(97) 00:33:26.244 fused_ordering(98) 00:33:26.244 fused_ordering(99) 00:33:26.244 fused_ordering(100) 00:33:26.244 fused_ordering(101) 00:33:26.244 fused_ordering(102) 00:33:26.244 fused_ordering(103) 00:33:26.244 fused_ordering(104) 00:33:26.244 fused_ordering(105) 00:33:26.244 fused_ordering(106) 00:33:26.244 fused_ordering(107) 00:33:26.244 fused_ordering(108) 00:33:26.244 fused_ordering(109) 00:33:26.244 fused_ordering(110) 00:33:26.244 fused_ordering(111) 00:33:26.244 fused_ordering(112) 00:33:26.244 fused_ordering(113) 00:33:26.244 fused_ordering(114) 00:33:26.244 fused_ordering(115) 00:33:26.244 fused_ordering(116) 00:33:26.244 fused_ordering(117) 00:33:26.244 fused_ordering(118) 00:33:26.244 fused_ordering(119) 00:33:26.244 fused_ordering(120) 00:33:26.244 fused_ordering(121) 00:33:26.244 fused_ordering(122) 00:33:26.244 fused_ordering(123) 00:33:26.244 fused_ordering(124) 00:33:26.244 fused_ordering(125) 00:33:26.244 fused_ordering(126) 00:33:26.244 fused_ordering(127) 00:33:26.244 fused_ordering(128) 00:33:26.244 fused_ordering(129) 00:33:26.244 fused_ordering(130) 00:33:26.244 fused_ordering(131) 00:33:26.244 fused_ordering(132) 00:33:26.244 fused_ordering(133) 00:33:26.244 fused_ordering(134) 00:33:26.244 fused_ordering(135) 00:33:26.244 fused_ordering(136) 00:33:26.244 fused_ordering(137) 00:33:26.244 fused_ordering(138) 00:33:26.244 fused_ordering(139) 00:33:26.244 fused_ordering(140) 00:33:26.244 fused_ordering(141) 00:33:26.244 fused_ordering(142) 00:33:26.244 fused_ordering(143) 00:33:26.244 fused_ordering(144) 00:33:26.244 fused_ordering(145) 00:33:26.244 fused_ordering(146) 00:33:26.244 fused_ordering(147) 00:33:26.244 fused_ordering(148) 00:33:26.244 fused_ordering(149) 00:33:26.244 fused_ordering(150) 00:33:26.244 fused_ordering(151) 00:33:26.244 fused_ordering(152) 00:33:26.244 fused_ordering(153) 00:33:26.244 fused_ordering(154) 00:33:26.244 fused_ordering(155) 00:33:26.244 fused_ordering(156) 00:33:26.244 fused_ordering(157) 00:33:26.244 fused_ordering(158) 00:33:26.244 fused_ordering(159) 00:33:26.244 fused_ordering(160) 00:33:26.244 fused_ordering(161) 00:33:26.244 fused_ordering(162) 00:33:26.244 fused_ordering(163) 00:33:26.244 fused_ordering(164) 00:33:26.244 fused_ordering(165) 00:33:26.244 fused_ordering(166) 00:33:26.244 fused_ordering(167) 00:33:26.244 fused_ordering(168) 00:33:26.244 fused_ordering(169) 00:33:26.244 fused_ordering(170) 00:33:26.244 fused_ordering(171) 00:33:26.244 fused_ordering(172) 00:33:26.244 fused_ordering(173) 00:33:26.244 fused_ordering(174) 00:33:26.244 fused_ordering(175) 00:33:26.244 fused_ordering(176) 00:33:26.244 fused_ordering(177) 00:33:26.244 fused_ordering(178) 00:33:26.244 fused_ordering(179) 00:33:26.244 fused_ordering(180) 00:33:26.244 fused_ordering(181) 00:33:26.244 fused_ordering(182) 00:33:26.244 fused_ordering(183) 00:33:26.244 fused_ordering(184) 00:33:26.244 fused_ordering(185) 00:33:26.244 fused_ordering(186) 00:33:26.244 fused_ordering(187) 00:33:26.244 fused_ordering(188) 00:33:26.244 
fused_ordering(189) 00:33:26.244 fused_ordering(190) 00:33:26.244 fused_ordering(191) 00:33:26.244 fused_ordering(192) 00:33:26.244 fused_ordering(193) 00:33:26.244 fused_ordering(194) 00:33:26.244 fused_ordering(195) 00:33:26.244 fused_ordering(196) 00:33:26.244 fused_ordering(197) 00:33:26.244 fused_ordering(198) 00:33:26.244 fused_ordering(199) 00:33:26.244 fused_ordering(200) 00:33:26.244 fused_ordering(201) 00:33:26.244 fused_ordering(202) 00:33:26.244 fused_ordering(203) 00:33:26.244 fused_ordering(204) 00:33:26.244 fused_ordering(205) 00:33:26.502 fused_ordering(206) 00:33:26.502 fused_ordering(207) 00:33:26.502 fused_ordering(208) 00:33:26.502 fused_ordering(209) 00:33:26.502 fused_ordering(210) 00:33:26.502 fused_ordering(211) 00:33:26.502 fused_ordering(212) 00:33:26.502 fused_ordering(213) 00:33:26.502 fused_ordering(214) 00:33:26.502 fused_ordering(215) 00:33:26.502 fused_ordering(216) 00:33:26.502 fused_ordering(217) 00:33:26.502 fused_ordering(218) 00:33:26.502 fused_ordering(219) 00:33:26.502 fused_ordering(220) 00:33:26.502 fused_ordering(221) 00:33:26.502 fused_ordering(222) 00:33:26.502 fused_ordering(223) 00:33:26.502 fused_ordering(224) 00:33:26.502 fused_ordering(225) 00:33:26.502 fused_ordering(226) 00:33:26.502 fused_ordering(227) 00:33:26.502 fused_ordering(228) 00:33:26.502 fused_ordering(229) 00:33:26.502 fused_ordering(230) 00:33:26.502 fused_ordering(231) 00:33:26.502 fused_ordering(232) 00:33:26.502 fused_ordering(233) 00:33:26.502 fused_ordering(234) 00:33:26.502 fused_ordering(235) 00:33:26.502 fused_ordering(236) 00:33:26.502 fused_ordering(237) 00:33:26.502 fused_ordering(238) 00:33:26.502 fused_ordering(239) 00:33:26.502 fused_ordering(240) 00:33:26.502 fused_ordering(241) 00:33:26.502 fused_ordering(242) 00:33:26.502 fused_ordering(243) 00:33:26.502 fused_ordering(244) 00:33:26.502 fused_ordering(245) 00:33:26.502 fused_ordering(246) 00:33:26.502 fused_ordering(247) 00:33:26.502 fused_ordering(248) 00:33:26.502 fused_ordering(249) 00:33:26.502 fused_ordering(250) 00:33:26.502 fused_ordering(251) 00:33:26.502 fused_ordering(252) 00:33:26.502 fused_ordering(253) 00:33:26.502 fused_ordering(254) 00:33:26.502 fused_ordering(255) 00:33:26.502 fused_ordering(256) 00:33:26.502 fused_ordering(257) 00:33:26.502 fused_ordering(258) 00:33:26.502 fused_ordering(259) 00:33:26.502 fused_ordering(260) 00:33:26.502 fused_ordering(261) 00:33:26.502 fused_ordering(262) 00:33:26.502 fused_ordering(263) 00:33:26.502 fused_ordering(264) 00:33:26.502 fused_ordering(265) 00:33:26.502 fused_ordering(266) 00:33:26.503 fused_ordering(267) 00:33:26.503 fused_ordering(268) 00:33:26.503 fused_ordering(269) 00:33:26.503 fused_ordering(270) 00:33:26.503 fused_ordering(271) 00:33:26.503 fused_ordering(272) 00:33:26.503 fused_ordering(273) 00:33:26.503 fused_ordering(274) 00:33:26.503 fused_ordering(275) 00:33:26.503 fused_ordering(276) 00:33:26.503 fused_ordering(277) 00:33:26.503 fused_ordering(278) 00:33:26.503 fused_ordering(279) 00:33:26.503 fused_ordering(280) 00:33:26.503 fused_ordering(281) 00:33:26.503 fused_ordering(282) 00:33:26.503 fused_ordering(283) 00:33:26.503 fused_ordering(284) 00:33:26.503 fused_ordering(285) 00:33:26.503 fused_ordering(286) 00:33:26.503 fused_ordering(287) 00:33:26.503 fused_ordering(288) 00:33:26.503 fused_ordering(289) 00:33:26.503 fused_ordering(290) 00:33:26.503 fused_ordering(291) 00:33:26.503 fused_ordering(292) 00:33:26.503 fused_ordering(293) 00:33:26.503 fused_ordering(294) 00:33:26.503 fused_ordering(295) 00:33:26.503 fused_ordering(296) 
00:33:26.503 fused_ordering(297) 00:33:26.503 fused_ordering(298) 00:33:26.503 fused_ordering(299) 00:33:26.503 fused_ordering(300) 00:33:26.503 fused_ordering(301) 00:33:26.503 fused_ordering(302) 00:33:26.503 fused_ordering(303) 00:33:26.503 fused_ordering(304) 00:33:26.503 fused_ordering(305) 00:33:26.503 fused_ordering(306) 00:33:26.503 fused_ordering(307) 00:33:26.503 fused_ordering(308) 00:33:26.503 fused_ordering(309) 00:33:26.503 fused_ordering(310) 00:33:26.503 fused_ordering(311) 00:33:26.503 fused_ordering(312) 00:33:26.503 fused_ordering(313) 00:33:26.503 fused_ordering(314) 00:33:26.503 fused_ordering(315) 00:33:26.503 fused_ordering(316) 00:33:26.503 fused_ordering(317) 00:33:26.503 fused_ordering(318) 00:33:26.503 fused_ordering(319) 00:33:26.503 fused_ordering(320) 00:33:26.503 fused_ordering(321) 00:33:26.503 fused_ordering(322) 00:33:26.503 fused_ordering(323) 00:33:26.503 fused_ordering(324) 00:33:26.503 fused_ordering(325) 00:33:26.503 fused_ordering(326) 00:33:26.503 fused_ordering(327) 00:33:26.503 fused_ordering(328) 00:33:26.503 fused_ordering(329) 00:33:26.503 fused_ordering(330) 00:33:26.503 fused_ordering(331) 00:33:26.503 fused_ordering(332) 00:33:26.503 fused_ordering(333) 00:33:26.503 fused_ordering(334) 00:33:26.503 fused_ordering(335) 00:33:26.503 fused_ordering(336) 00:33:26.503 fused_ordering(337) 00:33:26.503 fused_ordering(338) 00:33:26.503 fused_ordering(339) 00:33:26.503 fused_ordering(340) 00:33:26.503 fused_ordering(341) 00:33:26.503 fused_ordering(342) 00:33:26.503 fused_ordering(343) 00:33:26.503 fused_ordering(344) 00:33:26.503 fused_ordering(345) 00:33:26.503 fused_ordering(346) 00:33:26.503 fused_ordering(347) 00:33:26.503 fused_ordering(348) 00:33:26.503 fused_ordering(349) 00:33:26.503 fused_ordering(350) 00:33:26.503 fused_ordering(351) 00:33:26.503 fused_ordering(352) 00:33:26.503 fused_ordering(353) 00:33:26.503 fused_ordering(354) 00:33:26.503 fused_ordering(355) 00:33:26.503 fused_ordering(356) 00:33:26.503 fused_ordering(357) 00:33:26.503 fused_ordering(358) 00:33:26.503 fused_ordering(359) 00:33:26.503 fused_ordering(360) 00:33:26.503 fused_ordering(361) 00:33:26.503 fused_ordering(362) 00:33:26.503 fused_ordering(363) 00:33:26.503 fused_ordering(364) 00:33:26.503 fused_ordering(365) 00:33:26.503 fused_ordering(366) 00:33:26.503 fused_ordering(367) 00:33:26.503 fused_ordering(368) 00:33:26.503 fused_ordering(369) 00:33:26.503 fused_ordering(370) 00:33:26.503 fused_ordering(371) 00:33:26.503 fused_ordering(372) 00:33:26.503 fused_ordering(373) 00:33:26.503 fused_ordering(374) 00:33:26.503 fused_ordering(375) 00:33:26.503 fused_ordering(376) 00:33:26.503 fused_ordering(377) 00:33:26.503 fused_ordering(378) 00:33:26.503 fused_ordering(379) 00:33:26.503 fused_ordering(380) 00:33:26.503 fused_ordering(381) 00:33:26.503 fused_ordering(382) 00:33:26.503 fused_ordering(383) 00:33:26.503 fused_ordering(384) 00:33:26.503 fused_ordering(385) 00:33:26.503 fused_ordering(386) 00:33:26.503 fused_ordering(387) 00:33:26.503 fused_ordering(388) 00:33:26.503 fused_ordering(389) 00:33:26.503 fused_ordering(390) 00:33:26.503 fused_ordering(391) 00:33:26.503 fused_ordering(392) 00:33:26.503 fused_ordering(393) 00:33:26.503 fused_ordering(394) 00:33:26.503 fused_ordering(395) 00:33:26.503 fused_ordering(396) 00:33:26.503 fused_ordering(397) 00:33:26.503 fused_ordering(398) 00:33:26.503 fused_ordering(399) 00:33:26.503 fused_ordering(400) 00:33:26.503 fused_ordering(401) 00:33:26.503 fused_ordering(402) 00:33:26.503 fused_ordering(403) 00:33:26.503 
fused_ordering(404) 00:33:26.503 fused_ordering(405) 00:33:26.503 fused_ordering(406) 00:33:26.503 fused_ordering(407) 00:33:26.503 fused_ordering(408) 00:33:26.503 fused_ordering(409) 00:33:26.503 fused_ordering(410) 00:33:26.762 fused_ordering(411) 00:33:26.762 fused_ordering(412) 00:33:26.762 fused_ordering(413) 00:33:26.762 fused_ordering(414) 00:33:26.762 fused_ordering(415) 00:33:26.762 fused_ordering(416) 00:33:26.762 fused_ordering(417) 00:33:26.762 fused_ordering(418) 00:33:26.762 fused_ordering(419) 00:33:26.762 fused_ordering(420) 00:33:26.762 fused_ordering(421) 00:33:26.762 fused_ordering(422) 00:33:26.762 fused_ordering(423) 00:33:26.762 fused_ordering(424) 00:33:26.762 fused_ordering(425) 00:33:26.762 fused_ordering(426) 00:33:26.762 fused_ordering(427) 00:33:26.762 fused_ordering(428) 00:33:26.762 fused_ordering(429) 00:33:26.762 fused_ordering(430) 00:33:26.762 fused_ordering(431) 00:33:26.762 fused_ordering(432) 00:33:26.762 fused_ordering(433) 00:33:26.762 fused_ordering(434) 00:33:26.762 fused_ordering(435) 00:33:26.762 fused_ordering(436) 00:33:26.762 fused_ordering(437) 00:33:26.762 fused_ordering(438) 00:33:26.762 fused_ordering(439) 00:33:26.762 fused_ordering(440) 00:33:26.762 fused_ordering(441) 00:33:26.762 fused_ordering(442) 00:33:26.762 fused_ordering(443) 00:33:26.762 fused_ordering(444) 00:33:26.762 fused_ordering(445) 00:33:26.762 fused_ordering(446) 00:33:26.762 fused_ordering(447) 00:33:26.762 fused_ordering(448) 00:33:26.762 fused_ordering(449) 00:33:26.762 fused_ordering(450) 00:33:26.762 fused_ordering(451) 00:33:26.762 fused_ordering(452) 00:33:26.762 fused_ordering(453) 00:33:26.762 fused_ordering(454) 00:33:26.762 fused_ordering(455) 00:33:26.762 fused_ordering(456) 00:33:26.762 fused_ordering(457) 00:33:26.762 fused_ordering(458) 00:33:26.762 fused_ordering(459) 00:33:26.762 fused_ordering(460) 00:33:26.762 fused_ordering(461) 00:33:26.762 fused_ordering(462) 00:33:26.762 fused_ordering(463) 00:33:26.762 fused_ordering(464) 00:33:26.762 fused_ordering(465) 00:33:26.762 fused_ordering(466) 00:33:26.762 fused_ordering(467) 00:33:26.762 fused_ordering(468) 00:33:26.762 fused_ordering(469) 00:33:26.762 fused_ordering(470) 00:33:26.762 fused_ordering(471) 00:33:26.762 fused_ordering(472) 00:33:26.762 fused_ordering(473) 00:33:26.762 fused_ordering(474) 00:33:26.762 fused_ordering(475) 00:33:26.762 fused_ordering(476) 00:33:26.762 fused_ordering(477) 00:33:26.762 fused_ordering(478) 00:33:26.762 fused_ordering(479) 00:33:26.762 fused_ordering(480) 00:33:26.762 fused_ordering(481) 00:33:26.762 fused_ordering(482) 00:33:26.762 fused_ordering(483) 00:33:26.762 fused_ordering(484) 00:33:26.762 fused_ordering(485) 00:33:26.762 fused_ordering(486) 00:33:26.762 fused_ordering(487) 00:33:26.762 fused_ordering(488) 00:33:26.762 fused_ordering(489) 00:33:26.762 fused_ordering(490) 00:33:26.762 fused_ordering(491) 00:33:26.762 fused_ordering(492) 00:33:26.762 fused_ordering(493) 00:33:26.762 fused_ordering(494) 00:33:26.762 fused_ordering(495) 00:33:26.762 fused_ordering(496) 00:33:26.762 fused_ordering(497) 00:33:26.762 fused_ordering(498) 00:33:26.762 fused_ordering(499) 00:33:26.762 fused_ordering(500) 00:33:26.762 fused_ordering(501) 00:33:26.762 fused_ordering(502) 00:33:26.762 fused_ordering(503) 00:33:26.762 fused_ordering(504) 00:33:26.762 fused_ordering(505) 00:33:26.762 fused_ordering(506) 00:33:26.762 fused_ordering(507) 00:33:26.762 fused_ordering(508) 00:33:26.762 fused_ordering(509) 00:33:26.762 fused_ordering(510) 00:33:26.762 fused_ordering(511) 
00:33:26.762 fused_ordering(512) 00:33:26.762 fused_ordering(513) 00:33:26.762 fused_ordering(514) 00:33:26.762 fused_ordering(515) 00:33:26.762 fused_ordering(516) 00:33:26.762 fused_ordering(517) 00:33:26.762 fused_ordering(518) 00:33:26.762 fused_ordering(519) 00:33:26.762 fused_ordering(520) 00:33:26.762 fused_ordering(521) 00:33:26.762 fused_ordering(522) 00:33:26.762 fused_ordering(523) 00:33:26.762 fused_ordering(524) 00:33:26.762 fused_ordering(525) 00:33:26.762 fused_ordering(526) 00:33:26.762 fused_ordering(527) 00:33:26.762 fused_ordering(528) 00:33:26.762 fused_ordering(529) 00:33:26.762 fused_ordering(530) 00:33:26.762 fused_ordering(531) 00:33:26.762 fused_ordering(532) 00:33:26.762 fused_ordering(533) 00:33:26.762 fused_ordering(534) 00:33:26.762 fused_ordering(535) 00:33:26.762 fused_ordering(536) 00:33:26.762 fused_ordering(537) 00:33:26.762 fused_ordering(538) 00:33:26.762 fused_ordering(539) 00:33:26.762 fused_ordering(540) 00:33:26.762 fused_ordering(541) 00:33:26.762 fused_ordering(542) 00:33:26.762 fused_ordering(543) 00:33:26.762 fused_ordering(544) 00:33:26.762 fused_ordering(545) 00:33:26.762 fused_ordering(546) 00:33:26.762 fused_ordering(547) 00:33:26.762 fused_ordering(548) 00:33:26.762 fused_ordering(549) 00:33:26.763 fused_ordering(550) 00:33:26.763 fused_ordering(551) 00:33:26.763 fused_ordering(552) 00:33:26.763 fused_ordering(553) 00:33:26.763 fused_ordering(554) 00:33:26.763 fused_ordering(555) 00:33:26.763 fused_ordering(556) 00:33:26.763 fused_ordering(557) 00:33:26.763 fused_ordering(558) 00:33:26.763 fused_ordering(559) 00:33:26.763 fused_ordering(560) 00:33:26.763 fused_ordering(561) 00:33:26.763 fused_ordering(562) 00:33:26.763 fused_ordering(563) 00:33:26.763 fused_ordering(564) 00:33:26.763 fused_ordering(565) 00:33:26.763 fused_ordering(566) 00:33:26.763 fused_ordering(567) 00:33:26.763 fused_ordering(568) 00:33:26.763 fused_ordering(569) 00:33:26.763 fused_ordering(570) 00:33:26.763 fused_ordering(571) 00:33:26.763 fused_ordering(572) 00:33:26.763 fused_ordering(573) 00:33:26.763 fused_ordering(574) 00:33:26.763 fused_ordering(575) 00:33:26.763 fused_ordering(576) 00:33:26.763 fused_ordering(577) 00:33:26.763 fused_ordering(578) 00:33:26.763 fused_ordering(579) 00:33:26.763 fused_ordering(580) 00:33:26.763 fused_ordering(581) 00:33:26.763 fused_ordering(582) 00:33:26.763 fused_ordering(583) 00:33:26.763 fused_ordering(584) 00:33:26.763 fused_ordering(585) 00:33:26.763 fused_ordering(586) 00:33:26.763 fused_ordering(587) 00:33:26.763 fused_ordering(588) 00:33:26.763 fused_ordering(589) 00:33:26.763 fused_ordering(590) 00:33:26.763 fused_ordering(591) 00:33:26.763 fused_ordering(592) 00:33:26.763 fused_ordering(593) 00:33:26.763 fused_ordering(594) 00:33:26.763 fused_ordering(595) 00:33:26.763 fused_ordering(596) 00:33:26.763 fused_ordering(597) 00:33:26.763 fused_ordering(598) 00:33:26.763 fused_ordering(599) 00:33:26.763 fused_ordering(600) 00:33:26.763 fused_ordering(601) 00:33:26.763 fused_ordering(602) 00:33:26.763 fused_ordering(603) 00:33:26.763 fused_ordering(604) 00:33:26.763 fused_ordering(605) 00:33:26.763 fused_ordering(606) 00:33:26.763 fused_ordering(607) 00:33:26.763 fused_ordering(608) 00:33:26.763 fused_ordering(609) 00:33:26.763 fused_ordering(610) 00:33:26.763 fused_ordering(611) 00:33:26.763 fused_ordering(612) 00:33:26.763 fused_ordering(613) 00:33:26.763 fused_ordering(614) 00:33:26.763 fused_ordering(615) 00:33:27.020 fused_ordering(616) 00:33:27.020 fused_ordering(617) 00:33:27.020 fused_ordering(618) 00:33:27.020 
fused_ordering(619) 00:33:27.020 fused_ordering(620) 00:33:27.020 fused_ordering(621) 00:33:27.020 fused_ordering(622) 00:33:27.020 fused_ordering(623) 00:33:27.020 fused_ordering(624) 00:33:27.020 fused_ordering(625) 00:33:27.020 fused_ordering(626) 00:33:27.020 fused_ordering(627) 00:33:27.020 fused_ordering(628) 00:33:27.020 fused_ordering(629) 00:33:27.020 fused_ordering(630) 00:33:27.020 fused_ordering(631) 00:33:27.020 fused_ordering(632) 00:33:27.020 fused_ordering(633) 00:33:27.020 fused_ordering(634) 00:33:27.020 fused_ordering(635) 00:33:27.020 fused_ordering(636) 00:33:27.020 fused_ordering(637) 00:33:27.020 fused_ordering(638) 00:33:27.020 fused_ordering(639) 00:33:27.020 fused_ordering(640) 00:33:27.020 fused_ordering(641) 00:33:27.020 fused_ordering(642) 00:33:27.020 fused_ordering(643) 00:33:27.020 fused_ordering(644) 00:33:27.020 fused_ordering(645) 00:33:27.020 fused_ordering(646) 00:33:27.020 fused_ordering(647) 00:33:27.020 fused_ordering(648) 00:33:27.020 fused_ordering(649) 00:33:27.020 fused_ordering(650) 00:33:27.020 fused_ordering(651) 00:33:27.020 fused_ordering(652) 00:33:27.020 fused_ordering(653) 00:33:27.020 fused_ordering(654) 00:33:27.020 fused_ordering(655) 00:33:27.020 fused_ordering(656) 00:33:27.020 fused_ordering(657) 00:33:27.020 fused_ordering(658) 00:33:27.020 fused_ordering(659) 00:33:27.020 fused_ordering(660) 00:33:27.020 fused_ordering(661) 00:33:27.020 fused_ordering(662) 00:33:27.020 fused_ordering(663) 00:33:27.020 fused_ordering(664) 00:33:27.021 fused_ordering(665) 00:33:27.021 fused_ordering(666) 00:33:27.021 fused_ordering(667) 00:33:27.021 fused_ordering(668) 00:33:27.021 fused_ordering(669) 00:33:27.021 fused_ordering(670) 00:33:27.021 fused_ordering(671) 00:33:27.021 fused_ordering(672) 00:33:27.021 fused_ordering(673) 00:33:27.021 fused_ordering(674) 00:33:27.021 fused_ordering(675) 00:33:27.021 fused_ordering(676) 00:33:27.021 fused_ordering(677) 00:33:27.021 fused_ordering(678) 00:33:27.021 fused_ordering(679) 00:33:27.021 fused_ordering(680) 00:33:27.021 fused_ordering(681) 00:33:27.021 fused_ordering(682) 00:33:27.021 fused_ordering(683) 00:33:27.021 fused_ordering(684) 00:33:27.021 fused_ordering(685) 00:33:27.021 fused_ordering(686) 00:33:27.021 fused_ordering(687) 00:33:27.021 fused_ordering(688) 00:33:27.021 fused_ordering(689) 00:33:27.021 fused_ordering(690) 00:33:27.021 fused_ordering(691) 00:33:27.021 fused_ordering(692) 00:33:27.021 fused_ordering(693) 00:33:27.021 fused_ordering(694) 00:33:27.021 fused_ordering(695) 00:33:27.021 fused_ordering(696) 00:33:27.021 fused_ordering(697) 00:33:27.021 fused_ordering(698) 00:33:27.021 fused_ordering(699) 00:33:27.021 fused_ordering(700) 00:33:27.021 fused_ordering(701) 00:33:27.021 fused_ordering(702) 00:33:27.021 fused_ordering(703) 00:33:27.021 fused_ordering(704) 00:33:27.021 fused_ordering(705) 00:33:27.021 fused_ordering(706) 00:33:27.021 fused_ordering(707) 00:33:27.021 fused_ordering(708) 00:33:27.021 fused_ordering(709) 00:33:27.021 fused_ordering(710) 00:33:27.021 fused_ordering(711) 00:33:27.021 fused_ordering(712) 00:33:27.021 fused_ordering(713) 00:33:27.021 fused_ordering(714) 00:33:27.021 fused_ordering(715) 00:33:27.021 fused_ordering(716) 00:33:27.021 fused_ordering(717) 00:33:27.021 fused_ordering(718) 00:33:27.021 fused_ordering(719) 00:33:27.021 fused_ordering(720) 00:33:27.021 fused_ordering(721) 00:33:27.021 fused_ordering(722) 00:33:27.021 fused_ordering(723) 00:33:27.021 fused_ordering(724) 00:33:27.021 fused_ordering(725) 00:33:27.021 fused_ordering(726) 
00:33:27.021 fused_ordering(727) 00:33:27.021 fused_ordering(728) 00:33:27.021 fused_ordering(729) 00:33:27.021 fused_ordering(730) 00:33:27.021 fused_ordering(731) 00:33:27.021 fused_ordering(732) 00:33:27.021 fused_ordering(733) 00:33:27.021 fused_ordering(734) 00:33:27.021 fused_ordering(735) 00:33:27.021 fused_ordering(736) 00:33:27.021 fused_ordering(737) 00:33:27.021 fused_ordering(738) 00:33:27.021 fused_ordering(739) 00:33:27.021 fused_ordering(740) 00:33:27.021 fused_ordering(741) 00:33:27.021 fused_ordering(742) 00:33:27.021 fused_ordering(743) 00:33:27.021 fused_ordering(744) 00:33:27.021 fused_ordering(745) 00:33:27.021 fused_ordering(746) 00:33:27.021 fused_ordering(747) 00:33:27.021 fused_ordering(748) 00:33:27.021 fused_ordering(749) 00:33:27.021 fused_ordering(750) 00:33:27.021 fused_ordering(751) 00:33:27.021 fused_ordering(752) 00:33:27.021 fused_ordering(753) 00:33:27.021 fused_ordering(754) 00:33:27.021 fused_ordering(755) 00:33:27.021 fused_ordering(756) 00:33:27.021 fused_ordering(757) 00:33:27.021 fused_ordering(758) 00:33:27.021 fused_ordering(759) 00:33:27.021 fused_ordering(760) 00:33:27.021 fused_ordering(761) 00:33:27.021 fused_ordering(762) 00:33:27.021 fused_ordering(763) 00:33:27.021 fused_ordering(764) 00:33:27.021 fused_ordering(765) 00:33:27.021 fused_ordering(766) 00:33:27.021 fused_ordering(767) 00:33:27.021 fused_ordering(768) 00:33:27.021 fused_ordering(769) 00:33:27.021 fused_ordering(770) 00:33:27.021 fused_ordering(771) 00:33:27.021 fused_ordering(772) 00:33:27.021 fused_ordering(773) 00:33:27.021 fused_ordering(774) 00:33:27.021 fused_ordering(775) 00:33:27.021 fused_ordering(776) 00:33:27.021 fused_ordering(777) 00:33:27.021 fused_ordering(778) 00:33:27.021 fused_ordering(779) 00:33:27.021 fused_ordering(780) 00:33:27.021 fused_ordering(781) 00:33:27.021 fused_ordering(782) 00:33:27.021 fused_ordering(783) 00:33:27.021 fused_ordering(784) 00:33:27.021 fused_ordering(785) 00:33:27.021 fused_ordering(786) 00:33:27.021 fused_ordering(787) 00:33:27.021 fused_ordering(788) 00:33:27.021 fused_ordering(789) 00:33:27.021 fused_ordering(790) 00:33:27.021 fused_ordering(791) 00:33:27.021 fused_ordering(792) 00:33:27.021 fused_ordering(793) 00:33:27.021 fused_ordering(794) 00:33:27.021 fused_ordering(795) 00:33:27.021 fused_ordering(796) 00:33:27.021 fused_ordering(797) 00:33:27.021 fused_ordering(798) 00:33:27.021 fused_ordering(799) 00:33:27.021 fused_ordering(800) 00:33:27.021 fused_ordering(801) 00:33:27.021 fused_ordering(802) 00:33:27.021 fused_ordering(803) 00:33:27.021 fused_ordering(804) 00:33:27.021 fused_ordering(805) 00:33:27.021 fused_ordering(806) 00:33:27.021 fused_ordering(807) 00:33:27.021 fused_ordering(808) 00:33:27.021 fused_ordering(809) 00:33:27.021 fused_ordering(810) 00:33:27.021 fused_ordering(811) 00:33:27.021 fused_ordering(812) 00:33:27.021 fused_ordering(813) 00:33:27.021 fused_ordering(814) 00:33:27.021 fused_ordering(815) 00:33:27.021 fused_ordering(816) 00:33:27.021 fused_ordering(817) 00:33:27.021 fused_ordering(818) 00:33:27.021 fused_ordering(819) 00:33:27.021 fused_ordering(820) 00:33:27.586 fused_ordering(821) 00:33:27.586 fused_ordering(822) 00:33:27.586 fused_ordering(823) 00:33:27.586 fused_ordering(824) 00:33:27.586 fused_ordering(825) 00:33:27.586 fused_ordering(826) 00:33:27.586 fused_ordering(827) 00:33:27.586 fused_ordering(828) 00:33:27.586 fused_ordering(829) 00:33:27.586 fused_ordering(830) 00:33:27.586 fused_ordering(831) 00:33:27.586 fused_ordering(832) 00:33:27.586 fused_ordering(833) 00:33:27.586 
fused_ordering(834) 00:33:27.586 fused_ordering(835) 00:33:27.586 fused_ordering(836) 00:33:27.586 fused_ordering(837) 00:33:27.586 fused_ordering(838) 00:33:27.586 fused_ordering(839) 00:33:27.586 fused_ordering(840) 00:33:27.586 fused_ordering(841) 00:33:27.586 fused_ordering(842) 00:33:27.586 fused_ordering(843) 00:33:27.586 fused_ordering(844) 00:33:27.586 fused_ordering(845) 00:33:27.586 fused_ordering(846) 00:33:27.586 fused_ordering(847) 00:33:27.586 fused_ordering(848) 00:33:27.586 fused_ordering(849) 00:33:27.586 fused_ordering(850) 00:33:27.586 fused_ordering(851) 00:33:27.586 fused_ordering(852) 00:33:27.586 fused_ordering(853) 00:33:27.586 fused_ordering(854) 00:33:27.586 fused_ordering(855) 00:33:27.586 fused_ordering(856) 00:33:27.586 fused_ordering(857) 00:33:27.586 fused_ordering(858) 00:33:27.586 fused_ordering(859) 00:33:27.586 fused_ordering(860) 00:33:27.586 fused_ordering(861) 00:33:27.586 fused_ordering(862) 00:33:27.586 fused_ordering(863) 00:33:27.586 fused_ordering(864) 00:33:27.586 fused_ordering(865) 00:33:27.586 fused_ordering(866) 00:33:27.586 fused_ordering(867) 00:33:27.586 fused_ordering(868) 00:33:27.586 fused_ordering(869) 00:33:27.586 fused_ordering(870) 00:33:27.586 fused_ordering(871) 00:33:27.586 fused_ordering(872) 00:33:27.586 fused_ordering(873) 00:33:27.586 fused_ordering(874) 00:33:27.586 fused_ordering(875) 00:33:27.586 fused_ordering(876) 00:33:27.586 fused_ordering(877) 00:33:27.586 fused_ordering(878) 00:33:27.586 fused_ordering(879) 00:33:27.586 fused_ordering(880) 00:33:27.586 fused_ordering(881) 00:33:27.586 fused_ordering(882) 00:33:27.586 fused_ordering(883) 00:33:27.586 fused_ordering(884) 00:33:27.586 fused_ordering(885) 00:33:27.586 fused_ordering(886) 00:33:27.586 fused_ordering(887) 00:33:27.586 fused_ordering(888) 00:33:27.586 fused_ordering(889) 00:33:27.586 fused_ordering(890) 00:33:27.586 fused_ordering(891) 00:33:27.586 fused_ordering(892) 00:33:27.586 fused_ordering(893) 00:33:27.586 fused_ordering(894) 00:33:27.586 fused_ordering(895) 00:33:27.586 fused_ordering(896) 00:33:27.586 fused_ordering(897) 00:33:27.586 fused_ordering(898) 00:33:27.586 fused_ordering(899) 00:33:27.586 fused_ordering(900) 00:33:27.586 fused_ordering(901) 00:33:27.586 fused_ordering(902) 00:33:27.586 fused_ordering(903) 00:33:27.586 fused_ordering(904) 00:33:27.586 fused_ordering(905) 00:33:27.586 fused_ordering(906) 00:33:27.586 fused_ordering(907) 00:33:27.586 fused_ordering(908) 00:33:27.586 fused_ordering(909) 00:33:27.586 fused_ordering(910) 00:33:27.586 fused_ordering(911) 00:33:27.586 fused_ordering(912) 00:33:27.586 fused_ordering(913) 00:33:27.586 fused_ordering(914) 00:33:27.586 fused_ordering(915) 00:33:27.586 fused_ordering(916) 00:33:27.586 fused_ordering(917) 00:33:27.586 fused_ordering(918) 00:33:27.586 fused_ordering(919) 00:33:27.586 fused_ordering(920) 00:33:27.586 fused_ordering(921) 00:33:27.586 fused_ordering(922) 00:33:27.586 fused_ordering(923) 00:33:27.586 fused_ordering(924) 00:33:27.586 fused_ordering(925) 00:33:27.586 fused_ordering(926) 00:33:27.586 fused_ordering(927) 00:33:27.586 fused_ordering(928) 00:33:27.587 fused_ordering(929) 00:33:27.587 fused_ordering(930) 00:33:27.587 fused_ordering(931) 00:33:27.587 fused_ordering(932) 00:33:27.587 fused_ordering(933) 00:33:27.587 fused_ordering(934) 00:33:27.587 fused_ordering(935) 00:33:27.587 fused_ordering(936) 00:33:27.587 fused_ordering(937) 00:33:27.587 fused_ordering(938) 00:33:27.587 fused_ordering(939) 00:33:27.587 fused_ordering(940) 00:33:27.587 fused_ordering(941) 
00:33:27.587 fused_ordering(942) 00:33:27.587 fused_ordering(943) 00:33:27.587 fused_ordering(944) 00:33:27.587 fused_ordering(945) 00:33:27.587 fused_ordering(946) 00:33:27.587 fused_ordering(947) 00:33:27.587 fused_ordering(948) 00:33:27.587 fused_ordering(949) 00:33:27.587 fused_ordering(950) 00:33:27.587 fused_ordering(951) 00:33:27.587 fused_ordering(952) 00:33:27.587 fused_ordering(953) 00:33:27.587 fused_ordering(954) 00:33:27.587 fused_ordering(955) 00:33:27.587 fused_ordering(956) 00:33:27.587 fused_ordering(957) 00:33:27.587 fused_ordering(958) 00:33:27.587 fused_ordering(959) 00:33:27.587 fused_ordering(960) 00:33:27.587 fused_ordering(961) 00:33:27.587 fused_ordering(962) 00:33:27.587 fused_ordering(963) 00:33:27.587 fused_ordering(964) 00:33:27.587 fused_ordering(965) 00:33:27.587 fused_ordering(966) 00:33:27.587 fused_ordering(967) 00:33:27.587 fused_ordering(968) 00:33:27.587 fused_ordering(969) 00:33:27.587 fused_ordering(970) 00:33:27.587 fused_ordering(971) 00:33:27.587 fused_ordering(972) 00:33:27.587 fused_ordering(973) 00:33:27.587 fused_ordering(974) 00:33:27.587 fused_ordering(975) 00:33:27.587 fused_ordering(976) 00:33:27.587 fused_ordering(977) 00:33:27.587 fused_ordering(978) 00:33:27.587 fused_ordering(979) 00:33:27.587 fused_ordering(980) 00:33:27.587 fused_ordering(981) 00:33:27.587 fused_ordering(982) 00:33:27.587 fused_ordering(983) 00:33:27.587 fused_ordering(984) 00:33:27.587 fused_ordering(985) 00:33:27.587 fused_ordering(986) 00:33:27.587 fused_ordering(987) 00:33:27.587 fused_ordering(988) 00:33:27.587 fused_ordering(989) 00:33:27.587 fused_ordering(990) 00:33:27.587 fused_ordering(991) 00:33:27.587 fused_ordering(992) 00:33:27.587 fused_ordering(993) 00:33:27.587 fused_ordering(994) 00:33:27.587 fused_ordering(995) 00:33:27.587 fused_ordering(996) 00:33:27.587 fused_ordering(997) 00:33:27.587 fused_ordering(998) 00:33:27.587 fused_ordering(999) 00:33:27.587 fused_ordering(1000) 00:33:27.587 fused_ordering(1001) 00:33:27.587 fused_ordering(1002) 00:33:27.587 fused_ordering(1003) 00:33:27.587 fused_ordering(1004) 00:33:27.587 fused_ordering(1005) 00:33:27.587 fused_ordering(1006) 00:33:27.587 fused_ordering(1007) 00:33:27.587 fused_ordering(1008) 00:33:27.587 fused_ordering(1009) 00:33:27.587 fused_ordering(1010) 00:33:27.587 fused_ordering(1011) 00:33:27.587 fused_ordering(1012) 00:33:27.587 fused_ordering(1013) 00:33:27.587 fused_ordering(1014) 00:33:27.587 fused_ordering(1015) 00:33:27.587 fused_ordering(1016) 00:33:27.587 fused_ordering(1017) 00:33:27.587 fused_ordering(1018) 00:33:27.587 fused_ordering(1019) 00:33:27.587 fused_ordering(1020) 00:33:27.587 fused_ordering(1021) 00:33:27.587 fused_ordering(1022) 00:33:27.587 fused_ordering(1023) 00:33:27.587 08:30:00 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:33:27.587 08:30:00 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:33:27.587 08:30:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:27.587 08:30:00 -- nvmf/common.sh@116 -- # sync 00:33:27.587 08:30:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:27.587 08:30:00 -- nvmf/common.sh@119 -- # set +e 00:33:27.587 08:30:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:27.587 08:30:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:27.587 rmmod nvme_tcp 00:33:27.587 rmmod nvme_fabrics 00:33:27.587 rmmod nvme_keyring 00:33:27.587 08:30:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:27.587 08:30:00 -- nvmf/common.sh@123 -- # set -e 00:33:27.587 08:30:00 -- nvmf/common.sh@124 -- # return 0 
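(For reference, a condensed view of the setup this suite just traced. rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py; the sketch below assumes a running nvmf_tgt and reuses the exact names and addresses from this run. Each fused_ordering(N) line above appears to be one iteration of the tool's fused command sequence, 1024 in total (0 through 1023), all of which completed before the trap-driven teardown and module unloads seen here.)

# Minimal sketch of the nvmf_fused_ordering setup, commands as traced above:
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MB null bdev, 512 B blocks ("size: 1GB" above)
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
test/nvme/fused_ordering/fused_ordering \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'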
00:33:27.587 08:30:00 -- nvmf/common.sh@477 -- # '[' -n 68814 ']' 00:33:27.587 08:30:00 -- nvmf/common.sh@478 -- # killprocess 68814 00:33:27.587 08:30:00 -- common/autotest_common.sh@926 -- # '[' -z 68814 ']' 00:33:27.587 08:30:00 -- common/autotest_common.sh@930 -- # kill -0 68814 00:33:27.587 08:30:00 -- common/autotest_common.sh@931 -- # uname 00:33:27.587 08:30:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:27.587 08:30:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68814 00:33:27.587 08:30:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:27.587 08:30:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:27.587 killing process with pid 68814 00:33:27.587 08:30:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68814' 00:33:27.587 08:30:00 -- common/autotest_common.sh@945 -- # kill 68814 00:33:27.587 08:30:00 -- common/autotest_common.sh@950 -- # wait 68814 00:33:27.844 08:30:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:27.844 08:30:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:27.844 08:30:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:27.844 08:30:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:27.844 08:30:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:27.844 08:30:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.844 08:30:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:27.844 08:30:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.844 08:30:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:33:27.844 00:33:27.844 real 0m3.572s 00:33:27.844 user 0m4.102s 00:33:27.844 sys 0m1.165s 00:33:27.844 08:30:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:27.844 08:30:01 -- common/autotest_common.sh@10 -- # set +x 00:33:27.844 ************************************ 00:33:27.844 END TEST nvmf_fused_ordering 00:33:27.844 ************************************ 00:33:27.844 08:30:01 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:33:27.844 08:30:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:27.844 08:30:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:27.844 08:30:01 -- common/autotest_common.sh@10 -- # set +x 00:33:27.844 ************************************ 00:33:27.844 START TEST nvmf_delete_subsystem 00:33:27.844 ************************************ 00:33:27.845 08:30:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:33:28.102 * Looking for test storage... 
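(The END/START banners mark the harness moving from nvmf_fused_ordering to nvmf_delete_subsystem. The invocation is visible in the trace; as a sketch, one suite can be run in isolation the same way, assuming run_test from autotest_common.sh is sourced, since that wrapper is what times the script and prints the banners:)

run_test nvmf_delete_subsystem \
  /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp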
00:33:28.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:28.102 08:30:01 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:28.102 08:30:01 -- nvmf/common.sh@7 -- # uname -s 00:33:28.102 08:30:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:28.102 08:30:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:28.102 08:30:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:28.102 08:30:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:28.102 08:30:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:28.102 08:30:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:28.102 08:30:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:28.102 08:30:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:28.102 08:30:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:28.102 08:30:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:28.102 08:30:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:33:28.102 08:30:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:33:28.102 08:30:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:28.102 08:30:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:28.102 08:30:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:28.102 08:30:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:28.102 08:30:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:28.102 08:30:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:28.102 08:30:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:28.102 08:30:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.102 08:30:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.102 08:30:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.102 08:30:01 -- 
paths/export.sh@5 -- # export PATH 00:33:28.103 08:30:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.103 08:30:01 -- nvmf/common.sh@46 -- # : 0 00:33:28.103 08:30:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:28.103 08:30:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:28.103 08:30:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:28.103 08:30:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:28.103 08:30:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:28.103 08:30:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:28.103 08:30:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:28.103 08:30:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:28.103 08:30:01 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:33:28.103 08:30:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:28.103 08:30:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:28.103 08:30:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:28.103 08:30:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:28.103 08:30:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:28.103 08:30:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.103 08:30:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:28.103 08:30:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.103 08:30:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:33:28.103 08:30:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:33:28.103 08:30:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:33:28.103 08:30:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:33:28.103 08:30:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:33:28.103 08:30:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:33:28.103 08:30:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:28.103 08:30:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:28.103 08:30:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:28.103 08:30:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:33:28.103 08:30:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:28.103 08:30:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:28.103 08:30:01 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:28.103 08:30:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:28.103 08:30:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:28.103 08:30:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:28.103 08:30:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:28.103 08:30:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:28.103 08:30:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:33:28.103 08:30:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:33:28.103 Cannot find device "nvmf_tgt_br" 00:33:28.103 
08:30:01 -- nvmf/common.sh@154 -- # true 00:33:28.103 08:30:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:33:28.103 Cannot find device "nvmf_tgt_br2" 00:33:28.103 08:30:01 -- nvmf/common.sh@155 -- # true 00:33:28.103 08:30:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:33:28.103 08:30:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:33:28.103 Cannot find device "nvmf_tgt_br" 00:33:28.103 08:30:01 -- nvmf/common.sh@157 -- # true 00:33:28.103 08:30:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:33:28.103 Cannot find device "nvmf_tgt_br2" 00:33:28.103 08:30:01 -- nvmf/common.sh@158 -- # true 00:33:28.103 08:30:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:33:28.103 08:30:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:33:28.103 08:30:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:28.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:28.103 08:30:01 -- nvmf/common.sh@161 -- # true 00:33:28.103 08:30:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:28.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:28.103 08:30:01 -- nvmf/common.sh@162 -- # true 00:33:28.103 08:30:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:33:28.360 08:30:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:28.360 08:30:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:28.360 08:30:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:28.360 08:30:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:28.360 08:30:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:28.360 08:30:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:28.360 08:30:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:28.360 08:30:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:28.360 08:30:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:33:28.360 08:30:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:33:28.360 08:30:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:33:28.360 08:30:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:33:28.360 08:30:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:28.360 08:30:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:28.360 08:30:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:28.360 08:30:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:33:28.360 08:30:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:33:28.360 08:30:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:33:28.360 08:30:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:28.360 08:30:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:28.360 08:30:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:28.360 08:30:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:28.360 08:30:01 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:33:28.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:28.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:33:28.360 00:33:28.360 --- 10.0.0.2 ping statistics --- 00:33:28.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.360 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:33:28.360 08:30:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:33:28.360 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:28.360 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:33:28.360 00:33:28.360 --- 10.0.0.3 ping statistics --- 00:33:28.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.360 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:33:28.360 08:30:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:28.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:28.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:33:28.360 00:33:28.360 --- 10.0.0.1 ping statistics --- 00:33:28.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.360 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:33:28.360 08:30:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:28.360 08:30:01 -- nvmf/common.sh@421 -- # return 0 00:33:28.360 08:30:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:28.360 08:30:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:28.360 08:30:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:28.360 08:30:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:28.360 08:30:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:28.360 08:30:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:28.360 08:30:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:28.360 08:30:01 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:33:28.361 08:30:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:28.361 08:30:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:28.361 08:30:01 -- common/autotest_common.sh@10 -- # set +x 00:33:28.361 08:30:01 -- nvmf/common.sh@469 -- # nvmfpid=69039 00:33:28.361 08:30:01 -- nvmf/common.sh@470 -- # waitforlisten 69039 00:33:28.361 08:30:01 -- common/autotest_common.sh@819 -- # '[' -z 69039 ']' 00:33:28.361 08:30:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:28.361 08:30:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:28.361 08:30:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:28.361 08:30:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:28.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:28.361 08:30:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:28.361 08:30:01 -- common/autotest_common.sh@10 -- # set +x 00:33:28.361 [2024-04-17 08:30:01.666610] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
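(All three pings succeeding is the checkpoint that nvmftestinit's virtual topology is up: the initiator side lives in the root namespace on nvmf_init_if, the target side in the nvmf_tgt_ns_spdk namespace, and the veth pairs meet on the nvmf_br bridge. The earlier "Cannot find device" and "Cannot open network namespace" messages are the best-effort teardown of leftovers from a previous run and are expected. A condensed sketch of the topology, commands as traced above, dropping the second target interface (nvmf_tgt_if2 / 10.0.0.3) for brevity:)

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> target, as verified above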
00:33:28.361 [2024-04-17 08:30:01.666688] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:28.618 [2024-04-17 08:30:01.808163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:28.618 [2024-04-17 08:30:01.912549] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:28.618 [2024-04-17 08:30:01.912678] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:28.618 [2024-04-17 08:30:01.912686] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:28.618 [2024-04-17 08:30:01.912692] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:28.618 [2024-04-17 08:30:01.912881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.618 [2024-04-17 08:30:01.912880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.553 08:30:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:29.553 08:30:02 -- common/autotest_common.sh@852 -- # return 0 00:33:29.553 08:30:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:29.553 08:30:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:29.553 08:30:02 -- common/autotest_common.sh@10 -- # set +x 00:33:29.553 08:30:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:29.553 08:30:02 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:29.553 08:30:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:29.553 08:30:02 -- common/autotest_common.sh@10 -- # set +x 00:33:29.553 [2024-04-17 08:30:02.617023] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:29.553 08:30:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:29.553 08:30:02 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:29.553 08:30:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:29.553 08:30:02 -- common/autotest_common.sh@10 -- # set +x 00:33:29.553 08:30:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:29.553 08:30:02 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:29.553 08:30:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:29.553 08:30:02 -- common/autotest_common.sh@10 -- # set +x 00:33:29.553 [2024-04-17 08:30:02.633176] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:29.553 08:30:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:29.553 08:30:02 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:33:29.553 08:30:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:29.553 08:30:02 -- common/autotest_common.sh@10 -- # set +x 00:33:29.553 NULL1 00:33:29.553 08:30:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:29.553 08:30:02 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:29.553 08:30:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:29.553 08:30:02 -- common/autotest_common.sh@10 -- # set +x 00:33:29.553 
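(The subsystem plumbing here mirrors the previous test; the twist is the Delay0 bdev created just below. bdev_delay_create's latency arguments are in microseconds, so -r/-t/-w/-n 1000000 holds every I/O for roughly one second, which guarantees a full queue of in-flight commands at the moment the subsystem is deleted. A sketch of the delete-under-load sequence, parameters verbatim from the trace; the script backgrounds perf, shown here with &:)

scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
  -r 1000000 -t 1000000 -w 1000000 -n 1000000       # ~1 s added to every I/O
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
build/bin/spdk_nvme_perf -c 0xC \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
  -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &         # keep 128 slow I/Os queued
sleep 2                                             # let the queue fill
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # delete mid-I/O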
Delay0 00:33:29.553 08:30:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:29.553 08:30:02 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:29.553 08:30:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:29.553 08:30:02 -- common/autotest_common.sh@10 -- # set +x 00:33:29.553 08:30:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:29.553 08:30:02 -- target/delete_subsystem.sh@28 -- # perf_pid=69091 00:33:29.553 08:30:02 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:29.553 08:30:02 -- target/delete_subsystem.sh@30 -- # sleep 2 00:33:29.553 [2024-04-17 08:30:02.827275] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:33:31.456 08:30:04 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:31.456 08:30:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:31.456 08:30:04 -- common/autotest_common.sh@10 -- # set +x
00:33:31.716 [several hundred repeated 'Read completed with error (sct=0, sc=8)', 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' records elided here: the perf run's queued I/Os were failed back while nvmf_delete_subsystem tore down cnode1; the distinct *ERROR* records from that window are kept below]
00:33:31.716 [2024-04-17 08:30:04.862123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa04c10 is same with the state(5) to be set
00:33:31.716 [2024-04-17 08:30:04.862653] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e5840 is same with the state(5) to be set
00:33:32.654 [2024-04-17 08:30:05.841212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03f80 is same with the state(5) to be set
00:33:32.654 [2024-04-17 08:30:05.861664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03080 is same with the state(5) to be set
00:33:32.654 [2024-04-17 08:30:05.861895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e5af0 is same with the state(5) to be set
00:33:32.654 [2024-04-17 08:30:05.862372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f54c400bf20 is same with the state(5) to be set
00:33:32.654 [2024-04-17 08:30:05.862671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f54c400c600 is same with the state(5) to be set
00:33:32.654 [2024-04-17 08:30:05.863650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa03f80 (9): Bad file descriptor 00:33:32.654 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:33:32.654 08:30:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:32.654 08:30:05 -- target/delete_subsystem.sh@34 -- # delay=0 00:33:32.654 08:30:05 -- target/delete_subsystem.sh@35 -- # kill -0 69091 00:33:32.654 08:30:05 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:33:32.654 Initializing NVMe Controllers 00:33:32.654 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:32.654 Controller IO queue size 128, less than required. 00:33:32.654 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:32.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:32.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:32.655 Initialization complete. Launching workers. 
00:33:32.655 ========================================================
00:33:32.655 Latency(us)
00:33:32.655 Device Information : IOPS MiB/s Average min max
00:33:32.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.51 0.09 882390.30 565.53 1012431.09
00:33:32.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 187.42 0.09 902737.24 416.96 1014301.41
00:33:32.655 ========================================================
00:33:32.655 Total : 363.93 0.18 892868.70 416.96 1014301.41
00:33:32.655 00:33:33.224 08:30:06 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:33:33.224 08:30:06 -- target/delete_subsystem.sh@35 -- # kill -0 69091 00:33:33.224 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (69091) - No such process 00:33:33.224 08:30:06 -- target/delete_subsystem.sh@45 -- # NOT wait 69091 00:33:33.224 08:30:06 -- common/autotest_common.sh@640 -- # local es=0 00:33:33.224 08:30:06 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 69091 00:33:33.224 08:30:06 -- common/autotest_common.sh@628 -- # local arg=wait 00:33:33.224 08:30:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:33:33.224 08:30:06 -- common/autotest_common.sh@632 -- # type -t wait 00:33:33.224 08:30:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:33:33.224 08:30:06 -- common/autotest_common.sh@643 -- # wait 69091 00:33:33.224 08:30:06 -- common/autotest_common.sh@643 -- # es=1 00:33:33.224 08:30:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:33:33.224 08:30:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:33:33.224 08:30:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:33:33.224 08:30:06 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:33.224 08:30:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:33.224 08:30:06 -- common/autotest_common.sh@10 -- # set +x 00:33:33.224 08:30:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:33.224 08:30:06 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:33.224 08:30:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:33.224 08:30:06 -- common/autotest_common.sh@10 -- # set +x 00:33:33.224 [2024-04-17 08:30:06.391514] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:33.224 08:30:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:33.224 08:30:06 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:33.224 08:30:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:33.224 08:30:06 -- common/autotest_common.sh@10 -- # set +x 00:33:33.224 08:30:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:33.224 08:30:06 -- target/delete_subsystem.sh@54 -- # perf_pid=69141 00:33:33.224 08:30:06 -- target/delete_subsystem.sh@56 -- # delay=0 00:33:33.224 08:30:06 -- target/delete_subsystem.sh@57 -- # kill -0 69141 00:33:33.224 08:30:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:33.224 08:30:06 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:33.485 [2024-04-17 08:30:06.584073] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:33:33.747 08:30:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:33.747 08:30:06 -- target/delete_subsystem.sh@57 -- # kill -0 69141 00:33:33.747 08:30:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:34.317 08:30:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:34.317 08:30:07 -- target/delete_subsystem.sh@57 -- # kill -0 69141 00:33:34.317 08:30:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:34.914 08:30:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:34.914 08:30:07 -- target/delete_subsystem.sh@57 -- # kill -0 69141 00:33:34.914 08:30:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:35.173 08:30:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:35.173 08:30:08 -- target/delete_subsystem.sh@57 -- # kill -0 69141 00:33:35.173 08:30:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:35.741 08:30:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:35.741 08:30:08 -- target/delete_subsystem.sh@57 -- # kill -0 69141 00:33:35.741 08:30:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:36.310 08:30:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:36.310 08:30:09 -- target/delete_subsystem.sh@57 -- # kill -0 69141 00:33:36.310 08:30:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:36.570 Initializing NVMe Controllers 00:33:36.570 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:36.570 Controller IO queue size 128, less than required. 00:33:36.570 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:36.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:36.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:36.570 Initialization complete. Launching workers. 
00:33:36.570 ========================================================
00:33:36.570 Latency(us)
00:33:36.570 Device Information : IOPS MiB/s Average min max
00:33:36.570 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002770.49 1000150.51 1041975.87
00:33:36.570 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004437.35 1000151.73 1011708.63
00:33:36.570 ========================================================
00:33:36.570 Total : 256.00 0.12 1003603.92 1000150.51 1041975.87
00:33:36.570 00:33:36.828 08:30:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:36.828 08:30:09 -- target/delete_subsystem.sh@57 -- # kill -0 69141 00:33:36.828 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (69141) - No such process 00:33:36.828 08:30:09 -- target/delete_subsystem.sh@67 -- # wait 69141 00:33:36.828 08:30:09 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:33:36.828 08:30:09 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:33:36.828 08:30:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:36.828 08:30:09 -- nvmf/common.sh@116 -- # sync 00:33:36.828 08:30:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:36.828 08:30:09 -- nvmf/common.sh@119 -- # set +e 00:33:36.828 08:30:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:36.828 08:30:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:36.828 rmmod nvme_tcp 00:33:36.828 rmmod nvme_fabrics 00:33:36.828 rmmod nvme_keyring 00:33:36.829 08:30:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:36.829 08:30:10 -- nvmf/common.sh@123 -- # set -e 00:33:36.829 08:30:10 -- nvmf/common.sh@124 -- # return 0 00:33:36.829 08:30:10 -- nvmf/common.sh@477 -- # '[' -n 69039 ']' 00:33:36.829 08:30:10 -- nvmf/common.sh@478 -- # killprocess 69039 00:33:36.829 08:30:10 -- common/autotest_common.sh@926 -- # '[' -z 69039 ']' 00:33:36.829 08:30:10 -- common/autotest_common.sh@930 -- # kill -0 69039 00:33:36.829 08:30:10 -- common/autotest_common.sh@931 -- # uname 00:33:36.829 08:30:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:36.829 08:30:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69039 00:33:36.829 killing process with pid 69039 00:33:36.829 08:30:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:36.829 08:30:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:36.829 08:30:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69039' 00:33:36.829 08:30:10 -- common/autotest_common.sh@945 -- # kill 69039 00:33:36.829 08:30:10 -- common/autotest_common.sh@950 -- # wait 69039 00:33:37.088 08:30:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:37.088 08:30:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:37.088 08:30:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:37.088 08:30:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:37.088 08:30:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:37.088 08:30:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.088 08:30:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:37.088 08:30:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.088 08:30:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:33:37.088 ************************************ 00:33:37.088 END TEST nvmf_delete_subsystem 00:33:37.088 ************************************ 
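Both perf runs above exercise the same pattern from delete_subsystem.sh: spdk_nvme_perf is started in the background, the subsystem is deleted out from under it over RPC, and the script then polls with kill -0 (script lines 35 and 57 in the messages above) until the initiator notices the dead controller and exits. A minimal bash sketch of that idiom, assuming a target is already listening on 10.0.0.2:4420; the paths, flags, and limits mirror the log, everything else is illustrative:

    #!/usr/bin/env bash
    # Sketch only: delete an NVMe-oF subsystem while I/O is in flight and
    # wait for the initiator to die, as delete_subsystem.sh does above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

    "$perf" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
            -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    sleep 2                                                   # let I/O ramp up
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # tear down mid-I/O

    delay=0
    # kill -0 delivers no signal; it only tests whether the pid still exists.
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && { echo 'perf did not exit' >&2; exit 1; }
        sleep 0.5
    done
    echo 'perf exited after subsystem deletion'

The flood of 'completed with error (sct=0, sc=8)' records above is the expected outcome of that sequence: queued I/Os are failed back to perf rather than left hanging, so the kill -0 loop terminates after a few iterations.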
00:33:37.088 00:33:37.088 real 0m9.193s 00:33:37.088 user 0m28.922s 00:33:37.088 sys 0m1.065s 00:33:37.088 08:30:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:37.088 08:30:10 -- common/autotest_common.sh@10 -- # set +x 00:33:37.088 08:30:10 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:33:37.088 08:30:10 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:33:37.088 08:30:10 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:33:37.088 08:30:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:37.088 08:30:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:37.088 08:30:10 -- common/autotest_common.sh@10 -- # set +x 00:33:37.348 ************************************ 00:33:37.348 START TEST nvmf_vfio_user 00:33:37.348 ************************************ 00:33:37.348 08:30:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:33:37.348 * Looking for test storage... 00:33:37.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:37.348 08:30:10 -- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:37.348 08:30:10 -- nvmf/common.sh@7 -- # uname -s 00:33:37.348 08:30:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:37.348 08:30:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:37.348 08:30:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:37.348 08:30:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:37.348 08:30:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:37.348 08:30:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:37.348 08:30:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:37.348 08:30:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:37.348 08:30:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:37.348 08:30:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:37.348 08:30:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:33:37.348 08:30:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:33:37.348 08:30:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:37.348 08:30:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:37.348 08:30:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:37.348 08:30:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:37.348 08:30:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:37.348 08:30:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:37.348 08:30:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:37.348 08:30:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.348 08:30:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:[... same toolchain and system directories as the paths/export.sh@2 PATH above, elided ...]:/var/lib/snapd/snap/bin 00:33:37.348 08:30:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same toolchain and system directories, elided ...]:/var/lib/snapd/snap/bin 00:33:37.348 08:30:10 -- paths/export.sh@5 -- # export PATH 00:33:37.348 08:30:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same toolchain and system directories, elided ...]:/var/lib/snapd/snap/bin 00:33:37.348 08:30:10 -- nvmf/common.sh@46 -- # : 0 00:33:37.348 08:30:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:37.348 08:30:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:37.348 08:30:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:37.348 08:30:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:37.348 08:30:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:37.348 08:30:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:37.348 08:30:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:37.348 08:30:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:37.348 08:30:10 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:37.348 08:30:10 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:37.348 08:30:10 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:33:37.348 08:30:10 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:37.348 08:30:10 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:33:37.348 08:30:10 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:33:37.348 08:30:10 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:33:37.348 08:30:10 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:33:37.348 08:30:10 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:33:37.348 08:30:10 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:33:37.348 08:30:10 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=69260 00:33:37.348 Process pid: 69260 00:33:37.348 08:30:10 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 69260' 00:33:37.348 08:30:10 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:33:37.348 08:30:10 -- 
target/nvmf_vfio_user.sh@60 -- # waitforlisten 69260 00:33:37.348 08:30:10 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:33:37.348 08:30:10 -- common/autotest_common.sh@819 -- # '[' -z 69260 ']' 00:33:37.348 08:30:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:37.348 08:30:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:37.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:37.348 08:30:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:37.348 08:30:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:37.348 08:30:10 -- common/autotest_common.sh@10 -- # set +x 00:33:37.348 [2024-04-17 08:30:10.647684] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:33:37.348 [2024-04-17 08:30:10.647759] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:37.607 [2024-04-17 08:30:10.791250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:37.607 [2024-04-17 08:30:10.893363] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:37.607 [2024-04-17 08:30:10.893505] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:37.607 [2024-04-17 08:30:10.893514] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:37.607 [2024-04-17 08:30:10.893519] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
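The waitforlisten helper above blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock (max_retries=100 in the log). A rough sketch of that wait, not the helper itself; rpc_get_methods is a standard SPDK RPC used here purely as a liveness probe, and the retry budget is an assumption borrowed from the log:

    # Sketch of the waitforlisten idea: poll the RPC socket until the app
    # responds, bailing out early if the target dies during startup.
    tgt=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!

    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods only succeeds once the app listens on the socket.
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'target died' >&2; exit 1; }
        sleep 0.1
    done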
00:33:37.607 [2024-04-17 08:30:10.893776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:37.607 [2024-04-17 08:30:10.893912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:37.607 [2024-04-17 08:30:10.894039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:37.607 [2024-04-17 08:30:10.894043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.545 08:30:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:38.545 08:30:11 -- common/autotest_common.sh@852 -- # return 0 00:33:38.545 08:30:11 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:33:39.484 08:30:12 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:33:39.484 08:30:12 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:33:39.484 08:30:12 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:33:39.484 08:30:12 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:33:39.484 08:30:12 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:33:39.484 08:30:12 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:39.743 Malloc1 00:33:39.743 08:30:13 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:33:40.010 08:30:13 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:33:40.272 08:30:13 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:33:40.532 08:30:13 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:33:40.532 08:30:13 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:33:40.532 08:30:13 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:33:40.792 Malloc2 00:33:40.792 08:30:13 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:33:40.792 08:30:14 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:33:41.052 08:30:14 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:33:41.312 08:30:14 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:33:41.312 08:30:14 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:33:41.312 08:30:14 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:33:41.312 08:30:14 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:33:41.312 08:30:14 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:33:41.312 08:30:14 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:33:41.312 [2024-04-17 08:30:14.606585] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
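Condensing the RPC traffic above: setup_nvmf_vfio_user creates one VFIOUSER transport and then, per device, a malloc bdev, a subsystem, a namespace, and a vfio-user listener rooted in a per-device directory. Every command below appears verbatim in the log; only the loop around them is a sketch:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        # A VFIOUSER listener address is a directory, not an IP:port; the
        # target places its vfio-user socket and control files there.
        dir=/var/run/vfio-user/domain/vfio-user$i/$i
        mkdir -p "$dir"
        "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"
        "$rpc" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
        "$rpc" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
        "$rpc" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
               -t VFIOUSER -a "$dir" -s 0
    done

spdk_nvme_identify then attaches through the same directory (trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1), which is what produces the controller-initialization trace and identify dump that follow.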
00:33:41.312 [2024-04-17 08:30:14.606627] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69396 ] 00:33:41.574 [2024-04-17 08:30:14.736306] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:33:41.574 [2024-04-17 08:30:14.745675] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:33:41.574 [2024-04-17 08:30:14.745704] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f42049f1000 00:33:41.574 [2024-04-17 08:30:14.746668] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:33:41.574 [2024-04-17 08:30:14.747659] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:33:41.574 [2024-04-17 08:30:14.748660] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:33:41.574 [2024-04-17 08:30:14.749661] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:33:41.574 [2024-04-17 08:30:14.750661] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:33:41.574 [2024-04-17 08:30:14.751663] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:33:41.574 [2024-04-17 08:30:14.752664] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:33:41.574 [2024-04-17 08:30:14.753669] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:33:41.574 [2024-04-17 08:30:14.754673] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:33:41.574 [2024-04-17 08:30:14.754692] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f42049e6000 00:33:41.574 [2024-04-17 08:30:14.755751] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:33:41.574 [2024-04-17 08:30:14.768794] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:33:41.574 [2024-04-17 08:30:14.768826] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:33:41.574 [2024-04-17 08:30:14.773724] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:33:41.574 [2024-04-17 08:30:14.773774] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:33:41.574 [2024-04-17 08:30:14.773851] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:33:41.574 [2024-04-17 
08:30:14.773868] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:33:41.574 [2024-04-17 08:30:14.773873] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:33:41.574 [2024-04-17 08:30:14.774708] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:33:41.574 [2024-04-17 08:30:14.774725] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:33:41.574 [2024-04-17 08:30:14.774732] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:33:41.574 [2024-04-17 08:30:14.775708] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:33:41.574 [2024-04-17 08:30:14.775725] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:33:41.574 [2024-04-17 08:30:14.775731] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:33:41.574 [2024-04-17 08:30:14.776709] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:33:41.574 [2024-04-17 08:30:14.776722] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:33:41.574 [2024-04-17 08:30:14.777713] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:33:41.574 [2024-04-17 08:30:14.777726] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:33:41.574 [2024-04-17 08:30:14.777730] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:33:41.574 [2024-04-17 08:30:14.777736] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:33:41.574 [2024-04-17 08:30:14.777840] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:33:41.574 [2024-04-17 08:30:14.777844] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:33:41.574 [2024-04-17 08:30:14.777848] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:33:41.574 [2024-04-17 08:30:14.778716] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:33:41.574 [2024-04-17 08:30:14.779713] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:33:41.574 [2024-04-17 08:30:14.780714] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: 
offset 0x14, value 0x460001 00:33:41.574 [2024-04-17 08:30:14.781750] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:33:41.574 [2024-04-17 08:30:14.782725] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:33:41.574 [2024-04-17 08:30:14.782737] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:33:41.574 [2024-04-17 08:30:14.782741] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:33:41.574 [2024-04-17 08:30:14.782759] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:33:41.574 [2024-04-17 08:30:14.782766] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:33:41.574 [2024-04-17 08:30:14.782784] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:33:41.574 [2024-04-17 08:30:14.782788] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:33:41.574 [2024-04-17 08:30:14.782801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:33:41.574 [2024-04-17 08:30:14.782858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:33:41.574 [2024-04-17 08:30:14.782867] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:33:41.574 [2024-04-17 08:30:14.782873] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:33:41.574 [2024-04-17 08:30:14.782876] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:33:41.574 [2024-04-17 08:30:14.782879] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:33:41.574 [2024-04-17 08:30:14.782883] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:33:41.574 [2024-04-17 08:30:14.782886] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:33:41.574 [2024-04-17 08:30:14.782889] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:33:41.574 [2024-04-17 08:30:14.782898] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:33:41.574 [2024-04-17 08:30:14.782906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:33:41.574 [2024-04-17 08:30:14.782919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:33:41.574 [2024-04-17 08:30:14.782928] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.575 [2024-04-17 08:30:14.782935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.575 [2024-04-17 08:30:14.782941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.575 [2024-04-17 08:30:14.782948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.575 [2024-04-17 08:30:14.782951] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:33:41.575 [2024-04-17 08:30:14.782959] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:33:41.575 [2024-04-17 08:30:14.782966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:33:41.575 [2024-04-17 08:30:14.782983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:33:41.575 [2024-04-17 08:30:14.782987] nvme_ctrlr.c:2877:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:33:41.575 [2024-04-17 08:30:14.782991] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:33:41.575 [2024-04-17 08:30:14.782996] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:33:41.575 [2024-04-17 08:30:14.783002] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:33:41.575 [2024-04-17 08:30:14.783009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:33:41.575 [2024-04-17 08:30:14.783030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:33:41.575 [2024-04-17 08:30:14.783072] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:33:41.575 [2024-04-17 08:30:14.783078] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:33:41.575 [2024-04-17 08:30:14.783084] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:33:41.575 [2024-04-17 08:30:14.783087] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:33:41.575 [2024-04-17 08:30:14.783092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:33:41.575 [2024-04-17 08:30:14.783104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:33:41.575 [2024-04-17 
08:30:14.783116] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:33:41.575 [2024-04-17 08:30:14.783123] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:33:41.575 [2024-04-17 08:30:14.783129] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:33:41.575 [2024-04-17 08:30:14.783134] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:33:41.575 [2024-04-17 08:30:14.783137] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:33:41.575 [2024-04-17 08:30:14.783143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:33:41.575 [2024-04-17 08:30:14.783173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:33:41.575 [2024-04-17 08:30:14.783184] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:33:41.575 [2024-04-17 08:30:14.783190] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:33:41.575 [2024-04-17 08:30:14.783195] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:33:41.575 [2024-04-17 08:30:14.783198] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:33:41.575 [2024-04-17 08:30:14.783203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:33:41.575 [2024-04-17 08:30:14.783217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:33:41.575 [2024-04-17 08:30:14.783223] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:33:41.575 [2024-04-17 08:30:14.783228] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:33:41.575 [2024-04-17 08:30:14.783235] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:33:41.575 [2024-04-17 08:30:14.783239] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:33:41.575 [2024-04-17 08:30:14.783243] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:33:41.575 [2024-04-17 08:30:14.783247] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:33:41.575 [2024-04-17 08:30:14.783250] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:33:41.575 [2024-04-17 08:30:14.783253] 
nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:33:41.575 [2024-04-17 08:30:14.783286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:33:41.575 [2024-04-17 08:30:14.783305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:33:41.575 [2024-04-17 08:30:14.783315] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:33:41.575 [2024-04-17 08:30:14.783328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:33:41.575 [2024-04-17 08:30:14.783337] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:33:41.575 [2024-04-17 08:30:14.783349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:33:41.575 [2024-04-17 08:30:14.783359] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:33:41.575 [2024-04-17 08:30:14.783366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:33:41.575 [2024-04-17 08:30:14.783375] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:33:41.575 [2024-04-17 08:30:14.783378] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:33:41.575 [2024-04-17 08:30:14.783381] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:33:41.575 [2024-04-17 08:30:14.783383] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:33:41.575 [2024-04-17 08:30:14.783389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:33:41.575 [2024-04-17 08:30:14.783406] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:33:41.575 [2024-04-17 08:30:14.783410] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:33:41.575 [2024-04-17 08:30:14.783415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:33:41.575 [2024-04-17 08:30:14.783421] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:33:41.575 [2024-04-17 08:30:14.783423] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:33:41.575 [2024-04-17 08:30:14.783428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:33:41.575 [2024-04-17 08:30:14.783434] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:33:41.575 [2024-04-17 08:30:14.783437] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:33:41.575 [2024-04-17 08:30:14.783442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:33:41.575 [2024-04-17 08:30:14.783448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:33:41.575 [2024-04-17 08:30:14.783459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:33:41.575 [2024-04-17 08:30:14.783467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:33:41.575 ===================================================== 00:33:41.575 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:33:41.575 ===================================================== 00:33:41.575 Controller Capabilities/Features 00:33:41.575 ================================ 00:33:41.575 Vendor ID: 4e58 00:33:41.575 Subsystem Vendor ID: 4e58 00:33:41.575 Serial Number: SPDK1 00:33:41.575 Model Number: SPDK bdev Controller 00:33:41.575 Firmware Version: 24.01.1 00:33:41.575 Recommended Arb Burst: 6 00:33:41.575 IEEE OUI Identifier: 8d 6b 50 00:33:41.575 Multi-path I/O 00:33:41.575 May have multiple subsystem ports: Yes 00:33:41.575 May have multiple controllers: Yes 00:33:41.575 Associated with SR-IOV VF: No 00:33:41.575 Max Data Transfer Size: 131072 00:33:41.575 Max Number of Namespaces: 32 00:33:41.575 Max Number of I/O Queues: 127 00:33:41.575 NVMe Specification Version (VS): 1.3 00:33:41.575 NVMe Specification Version (Identify): 1.3 00:33:41.575 Maximum Queue Entries: 256 00:33:41.575 Contiguous Queues Required: Yes 00:33:41.575 Arbitration Mechanisms Supported 00:33:41.575 Weighted Round Robin: Not Supported 00:33:41.575 Vendor Specific: Not Supported 00:33:41.575 Reset Timeout: 15000 ms 00:33:41.575 Doorbell Stride: 4 bytes 00:33:41.575 NVM Subsystem Reset: Not Supported 00:33:41.575 Command Sets Supported 00:33:41.575 NVM Command Set: Supported 00:33:41.575 Boot Partition: Not Supported 00:33:41.575 Memory Page Size Minimum: 4096 bytes 00:33:41.575 Memory Page Size Maximum: 4096 bytes 00:33:41.575 Persistent Memory Region: Not Supported 00:33:41.575 Optional Asynchronous Events Supported 00:33:41.576 Namespace Attribute Notices: Supported 00:33:41.576 Firmware Activation Notices: Not Supported 00:33:41.576 ANA Change Notices: Not Supported 00:33:41.576 PLE Aggregate Log Change Notices: Not Supported 00:33:41.576 LBA Status Info Alert Notices: Not Supported 00:33:41.576 EGE Aggregate Log Change Notices: Not Supported 00:33:41.576 Normal NVM Subsystem Shutdown event: Not Supported 00:33:41.576 Zone Descriptor Change Notices: Not Supported 00:33:41.576 Discovery Log Change Notices: Not Supported 00:33:41.576 Controller Attributes 00:33:41.576 128-bit Host Identifier: Supported 00:33:41.576 Non-Operational Permissive Mode: Not Supported 00:33:41.576 NVM Sets: Not Supported 00:33:41.576 Read Recovery Levels: Not Supported 00:33:41.576 Endurance Groups: Not Supported 00:33:41.576 Predictable Latency Mode: Not Supported 00:33:41.576 Traffic Based Keep ALive: Not Supported 00:33:41.576 Namespace Granularity: Not Supported 00:33:41.576 SQ Associations: Not Supported 00:33:41.576 UUID List: Not Supported 00:33:41.576 Multi-Domain Subsystem: Not Supported 00:33:41.576 Fixed Capacity Management: Not Supported 00:33:41.576 Variable Capacity Management: Not Supported 00:33:41.576 Delete Endurance Group: Not Supported 00:33:41.576 Delete NVM Set: Not Supported 00:33:41.576 Extended LBA 
Formats Supported: Not Supported 00:33:41.576 Flexible Data Placement Supported: Not Supported 00:33:41.576 00:33:41.576 Controller Memory Buffer Support 00:33:41.576 ================================ 00:33:41.576 Supported: No 00:33:41.576 00:33:41.576 Persistent Memory Region Support 00:33:41.576 ================================ 00:33:41.576 Supported: No 00:33:41.576 00:33:41.576 Admin Command Set Attributes 00:33:41.576 ============================ 00:33:41.576 Security Send/Receive: Not Supported 00:33:41.576 Format NVM: Not Supported 00:33:41.576 Firmware Activate/Download: Not Supported 00:33:41.576 Namespace Management: Not Supported 00:33:41.576 Device Self-Test: Not Supported 00:33:41.576 Directives: Not Supported 00:33:41.576 NVMe-MI: Not Supported 00:33:41.576 Virtualization Management: Not Supported 00:33:41.576 Doorbell Buffer Config: Not Supported 00:33:41.576 Get LBA Status Capability: Not Supported 00:33:41.576 Command & Feature Lockdown Capability: Not Supported 00:33:41.576 Abort Command Limit: 4 00:33:41.576 Async Event Request Limit: 4 00:33:41.576 Number of Firmware Slots: N/A 00:33:41.576 Firmware Slot 1 Read-Only: N/A 00:33:41.576 [2024-04-17 08:30:14.783473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:33:41.576 Firmware Activation Without Reset: N/A 00:33:41.576 Multiple Update Detection Support: N/A 00:33:41.576 Firmware Update Granularity: No Information Provided 00:33:41.576 Per-Namespace SMART Log: No 00:33:41.576 Asymmetric Namespace Access Log Page: Not Supported 00:33:41.576 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:33:41.576 Command Effects Log Page: Supported 00:33:41.576 Get Log Page Extended Data: Supported 00:33:41.576 Telemetry Log Pages: Not Supported 00:33:41.576 Persistent Event Log Pages: Not Supported 00:33:41.576 Supported Log Pages Log Page: May Support 00:33:41.576 Commands Supported & Effects Log Page: Not Supported 00:33:41.576 Feature Identifiers & Effects Log Page:May Support 00:33:41.576 NVMe-MI Commands & Effects Log Page: May Support 00:33:41.576 Data Area 4 for Telemetry Log: Not Supported 00:33:41.576 Error Log Page Entries Supported: 128 00:33:41.576 Keep Alive: Supported 00:33:41.576 Keep Alive Granularity: 10000 ms 00:33:41.576 00:33:41.576 NVM Command Set Attributes 00:33:41.576 ========================== 00:33:41.576 Submission Queue Entry Size 00:33:41.576 Max: 64 00:33:41.576 Min: 64 00:33:41.576 Completion Queue Entry Size 00:33:41.576 Max: 16 00:33:41.576 Min: 16 00:33:41.576 Number of Namespaces: 32 00:33:41.576 Compare Command: Supported 00:33:41.576 Write Uncorrectable Command: Not Supported 00:33:41.576 Dataset Management Command: Supported 00:33:41.576 Write Zeroes Command: Supported 00:33:41.576 Set Features Save Field: Not Supported 00:33:41.576 Reservations: Not Supported 00:33:41.576 Timestamp: Not Supported 00:33:41.576 Copy: Supported 00:33:41.576 Volatile Write Cache: Present 00:33:41.576 Atomic Write Unit (Normal): 1 00:33:41.576 Atomic Write Unit (PFail): 1 00:33:41.576 Atomic Compare & Write Unit: 1 00:33:41.576 Fused Compare & Write: Supported 00:33:41.576 Scatter-Gather List 00:33:41.576 SGL Command Set: Supported (Dword aligned) 00:33:41.576 SGL Keyed: Not Supported 00:33:41.576 SGL Bit Bucket Descriptor: Not Supported 00:33:41.576 SGL Metadata Pointer: Not Supported 00:33:41.576 Oversized SGL: Not Supported 00:33:41.576 SGL Metadata Address: Not Supported 00:33:41.576 SGL Offset: Not Supported 00:33:41.576 Transport SGL Data Block:
Not Supported 00:33:41.576 Replay Protected Memory Block: Not Supported 00:33:41.576 00:33:41.576 Firmware Slot Information 00:33:41.576 ========================= 00:33:41.576 Active slot: 1 00:33:41.576 Slot 1 Firmware Revision: 24.01.1 00:33:41.576 00:33:41.576 00:33:41.576 Commands Supported and Effects 00:33:41.576 ============================== 00:33:41.576 Admin Commands 00:33:41.576 -------------- 00:33:41.576 Get Log Page (02h): Supported 00:33:41.576 Identify (06h): Supported 00:33:41.576 Abort (08h): Supported 00:33:41.576 Set Features (09h): Supported 00:33:41.576 Get Features (0Ah): Supported 00:33:41.576 Asynchronous Event Request (0Ch): Supported 00:33:41.576 Keep Alive (18h): Supported 00:33:41.576 I/O Commands 00:33:41.576 ------------ 00:33:41.576 Flush (00h): Supported LBA-Change 00:33:41.576 Write (01h): Supported LBA-Change 00:33:41.576 Read (02h): Supported 00:33:41.576 Compare (05h): Supported 00:33:41.576 Write Zeroes (08h): Supported LBA-Change 00:33:41.576 Dataset Management (09h): Supported LBA-Change 00:33:41.576 Copy (19h): Supported LBA-Change 00:33:41.576 Unknown (79h): Supported LBA-Change 00:33:41.576 Unknown (7Ah): Supported 00:33:41.576 00:33:41.576 Error Log 00:33:41.576 ========= 00:33:41.576 00:33:41.576 Arbitration 00:33:41.576 =========== 00:33:41.576 Arbitration Burst: 1 00:33:41.576 00:33:41.576 Power Management 00:33:41.576 ================ 00:33:41.576 Number of Power States: 1 00:33:41.576 Current Power State: Power State #0 00:33:41.576 Power State #0: 00:33:41.576 Max Power: 0.00 W 00:33:41.576 Non-Operational State: Operational 00:33:41.576 Entry Latency: Not Reported 00:33:41.576 Exit Latency: Not Reported 00:33:41.576 Relative Read Throughput: 0 00:33:41.576 Relative Read Latency: 0 00:33:41.576 Relative Write Throughput: 0 00:33:41.576 Relative Write Latency: 0 00:33:41.576 Idle Power: Not Reported 00:33:41.576 Active Power: Not Reported 00:33:41.576 Non-Operational Permissive Mode: Not Supported 00:33:41.576 00:33:41.576 Health Information 00:33:41.576 ================== 00:33:41.576 Critical Warnings: 00:33:41.576 Available Spare Space: OK 00:33:41.576 Temperature: OK 00:33:41.576 Device Reliability: OK 00:33:41.576 Read Only: No 00:33:41.576 Volatile Memory Backup: OK 00:33:41.576 Current Temperature: 0 Kelvin (-273 Celsius) 00:33:41.577 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:33:41.577 Available Spare: 0% 00:33:41.577 Available Spare Threshold: 0% 00:33:41.577 Life Percentage Used: 0% 00:33:41.577 Data Units Read: 0 00:33:41.577 Data Units Written: 0 00:33:41.577 Host Read Commands: 0 00:33:41.577 Host Write Commands: 0 00:33:41.577 Controller Busy Time: 0 minutes 00:33:41.577 Power Cycles: 0 00:33:41.577 Power On Hours: 0 hours 00:33:41.577 Unsafe Shutdowns: 0 00:33:41.577 Unrecoverable Media Errors: 0 00:33:41.577 Lifetime Error Log Entries: 0 00:33:41.577 Warning Temperature Time: 0 minutes 00:33:41.577 Critical Temperature Time: 0 minutes 00:33:41.577 00:33:41.577 Number of Queues 00:33:41.577 ================ 00:33:41.577 Number of I/O Submission Queues: 127 00:33:41.577 Number of I/O Completion Queues: 127 00:33:41.577 00:33:41.577 Active Namespaces 00:33:41.577 ================= 00:33:41.577 Namespace ID:1 00:33:41.577 Error Recovery Timeout: Unlimited 00:33:41.577 Command Set Identifier: NVM (00h) 00:33:41.577 Deallocate: Supported 00:33:41.577 Deallocated/Unwritten Error: Not Supported 00:33:41.577 Deallocated Read Value: Unknown 00:33:41.577 Deallocate in Write Zeroes: Not Supported 00:33:41.577 Deallocated Guard Field: 0xFFFF 00:33:41.577 Flush: Supported 00:33:41.577 Reservation: Supported 00:33:41.577 Namespace Sharing Capabilities: Multiple Controllers 00:33:41.577 Size (in LBAs): 131072 (0GiB) 00:33:41.577 Capacity (in LBAs): 131072 (0GiB) 00:33:41.577 Utilization (in LBAs): 131072 (0GiB) 00:33:41.577 NGUID: E959E8613921494B804DEE41C5FB4A7F 00:33:41.577 UUID: e959e861-3921-494b-804d-ee41c5fb4a7f 00:33:41.577 Thin Provisioning: Not Supported 00:33:41.577 Per-NS Atomic Units: Yes 00:33:41.577 Atomic Boundary Size (Normal): 0 00:33:41.577 Atomic Boundary Size (PFail): 0 00:33:41.577 Atomic Boundary Offset: 0 00:33:41.577 Maximum Single Source Range Length: 65535 00:33:41.577 Maximum Copy Length: 65535 00:33:41.577 Maximum Source Range Count: 1 00:33:41.577 NGUID/EUI64 Never Reused: No 00:33:41.577 Namespace Write Protected: No 00:33:41.577 Number of LBA Formats: 1 00:33:41.577 Current LBA Format: LBA Format #00 00:33:41.577 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:41.577 00:33:41.577 
[2024-04-17 08:30:14.783575] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:33:41.576 [2024-04-17 08:30:14.783586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:33:41.576 [2024-04-17 08:30:14.783612] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:33:41.576 [2024-04-17 08:30:14.783619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.576 [2024-04-17 08:30:14.783624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.576 [2024-04-17 08:30:14.783629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.576 [2024-04-17 08:30:14.783634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.576 [2024-04-17 08:30:14.787405] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:33:41.576 [2024-04-17 08:30:14.787424] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:33:41.576 [2024-04-17 08:30:14.787763] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:33:41.577 [2024-04-17 08:30:14.787773] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:33:41.577 [2024-04-17 08:30:14.788727] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:33:41.577 [2024-04-17 08:30:14.788742] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:33:41.577 [2024-04-17 08:30:14.788845] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:33:41.577 [2024-04-17 08:30:14.790787] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:33:41.577 
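The shutdown handshake in the debug records above is the standard NVMe register sequence: read CC (offset 0x14, value 0x460001), set CC.SHN to request a normal shutdown (0x464001), then poll CSTS (offset 0x1c) until SHST reports shutdown complete (0x9). A short decode of those exact values against the NVMe 1.3 register field layout (an illustrative sketch, not part of the test run):

    # Decode the CC (0x14) and CSTS (0x1c) values seen in the shutdown sequence.
    def decode_cc(cc):
        return {
            "EN": cc & 0x1,              # controller enable
            "SHN": (cc >> 14) & 0x3,     # shutdown notification, 01b = normal
            "IOSQES": (cc >> 16) & 0xF,  # SQ entry size = 2**IOSQES bytes
            "IOCQES": (cc >> 20) & 0xF,  # CQ entry size = 2**IOCQES bytes
        }

    def decode_csts(csts):
        return {
            "RDY": csts & 0x1,           # controller ready
            "CFS": (csts >> 1) & 0x1,    # controller fatal status
            "SHST": (csts >> 2) & 0x3,   # 10b = shutdown processing complete
        }

    print(decode_cc(0x460001))  # EN=1, SHN=0, IOSQES=6, IOCQES=4
    print(decode_cc(0x464001))  # SHN=1 -> normal shutdown requested
    print(decode_csts(0x9))     # RDY=1, SHST=2 -> shutdown complete

Note that IOSQES=6 and IOCQES=4 match the 64-byte submission and 16-byte completion queue entry sizes in the identify dump above.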
08:30:14 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:33:46.852 Initializing NVMe Controllers 00:33:46.852 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:33:46.852 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:33:46.852 Initialization complete. Launching workers. 00:33:46.852 ======================================================== 00:33:46.852 Latency(us) 00:33:46.852 Device Information : IOPS MiB/s Average min max 00:33:46.852 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 36696.02 143.34 3487.59 1031.54 10494.98 00:33:46.852 ======================================================== 00:33:46.852 Total : 36696.02 143.34 3487.59 1031.54 10494.98 00:33:46.852 00:33:46.852 08:30:20 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:33:52.131 Initializing NVMe Controllers 00:33:52.131 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:33:52.131 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:33:52.131 Initialization complete. Launching workers. 00:33:52.131 ======================================================== 00:33:52.131 Latency(us) 00:33:52.131 Device Information : IOPS MiB/s Average min max 00:33:52.131 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15897.60 62.10 8059.89 5984.20 16019.61 00:33:52.131 ======================================================== 00:33:52.131 Total : 15897.60 62.10 8059.89 5984.20 16019.61 00:33:52.131 00:33:52.131 08:30:25 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:33:57.441 Initializing NVMe Controllers 00:33:57.441 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:33:57.441 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:33:57.441 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:33:57.441 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:33:57.441 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:33:57.441 Initialization complete. Launching workers. 
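Before the reconnect run's per-core output continues below, a quick consistency check on the two spdk_nvme_perf summaries above: with -o 4096, the MiB/s column is simply the IOPS column scaled by the 4 KiB I/O size (values copied from the tables; the helper itself is illustrative):

    # MiB/s = IOPS * io_size / 2**20 for the 4 KiB runs above.
    def mibps(iops, io_size=4096):
        return iops * io_size / (1 << 20)

    print(f"{mibps(36696.02):.2f}")  # read run  -> 143.34, matches the table
    print(f"{mibps(15897.60):.2f}")  # write run -> 62.10, matches the table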
00:33:57.441 Starting thread on core 2 00:33:57.441 Starting thread on core 3 00:33:57.441 Starting thread on core 1 00:33:57.441 08:30:30 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:34:01.683 Initializing NVMe Controllers 00:34:01.683 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:34:01.683 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:34:01.683 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:34:01.683 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:34:01.683 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:34:01.683 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:34:01.683 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:34:01.683 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:34:01.683 Initialization complete. Launching workers. 00:34:01.683 Starting thread on core 1 with urgent priority queue 00:34:01.683 Starting thread on core 2 with urgent priority queue 00:34:01.683 Starting thread on core 3 with urgent priority queue 00:34:01.683 Starting thread on core 0 with urgent priority queue 00:34:01.684 SPDK bdev Controller (SPDK1 ) core 0: 6801.67 IO/s 14.70 secs/100000 ios 00:34:01.684 SPDK bdev Controller (SPDK1 ) core 1: 7301.00 IO/s 13.70 secs/100000 ios 00:34:01.684 SPDK bdev Controller (SPDK1 ) core 2: 7770.67 IO/s 12.87 secs/100000 ios 00:34:01.684 SPDK bdev Controller (SPDK1 ) core 3: 7164.67 IO/s 13.96 secs/100000 ios 00:34:01.684 ======================================================== 00:34:01.684 00:34:01.684 08:30:34 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:34:01.684 Initializing NVMe Controllers 00:34:01.684 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:34:01.684 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:34:01.684 Namespace ID: 1 size: 0GB 00:34:01.684 Initialization complete. 00:34:01.684 INFO: using host memory buffer for IO 00:34:01.684 Hello world! 00:34:01.684 08:30:34 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:34:02.619 Initializing NVMe Controllers 00:34:02.619 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:34:02.619 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:34:02.619 Initialization complete. Launching workers. 
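The arbitration summary above is likewise internally consistent: the run issues 100000 I/Os per core (-n 100000 in the echoed configuration), so the secs/100000 ios column is just 100000 / IOPS. A worked check with the table's values, before the overhead tool's latency output below:

    # "secs/100000 ios" = 100000 / IOPS for each core in the arbitration table.
    for core, iops in [(0, 6801.67), (1, 7301.00), (2, 7770.67), (3, 7164.67)]:
        print(f"core {core}: {100000 / iops:.2f} secs/100000 ios")
    # -> 14.70, 13.70, 12.87, 13.96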
00:34:02.619 submit (in ns) avg, min, max = 9457.8, 3772.1, 4041262.0 00:34:02.619 complete (in ns) avg, min, max = 22957.8, 2132.8, 4031457.6 00:34:02.619 00:34:02.619 Submit histogram 00:34:02.619 ================ 00:34:02.619 Range in us Cumulative Count 00:34:02.619 3.745 - 3.773: 0.0072% ( 1) 00:34:02.619 3.773 - 3.801: 3.0420% ( 423) 00:34:02.619 3.801 - 3.829: 16.9321% ( 1936) 00:34:02.619 3.829 - 3.857: 37.0641% ( 2806) 00:34:02.619 3.857 - 3.885: 55.1586% ( 2522) 00:34:02.619 3.885 - 3.913: 65.7340% ( 1474) 00:34:02.619 3.913 - 3.941: 70.2970% ( 636) 00:34:02.619 3.941 - 3.969: 72.6575% ( 329) 00:34:02.619 3.969 - 3.997: 74.0709% ( 197) 00:34:02.619 3.997 - 4.024: 74.9749% ( 126) 00:34:02.619 4.024 - 4.052: 75.7928% ( 114) 00:34:02.619 4.052 - 4.080: 77.0197% ( 171) 00:34:02.619 4.080 - 4.108: 80.3343% ( 462) 00:34:02.619 4.108 - 4.136: 85.2346% ( 683) 00:34:02.619 4.136 - 4.164: 89.6111% ( 610) 00:34:02.619 4.164 - 4.192: 93.1841% ( 498) 00:34:02.619 4.192 - 4.220: 95.4298% ( 313) 00:34:02.619 4.220 - 4.248: 96.7786% ( 188) 00:34:02.619 4.248 - 4.276: 97.4171% ( 89) 00:34:02.619 4.276 - 4.304: 97.7400% ( 45) 00:34:02.619 4.304 - 4.332: 97.8620% ( 17) 00:34:02.619 4.332 - 4.360: 97.9337% ( 10) 00:34:02.619 4.360 - 4.388: 98.0126% ( 11) 00:34:02.619 4.388 - 4.416: 98.0700% ( 8) 00:34:02.619 4.416 - 4.444: 98.1418% ( 10) 00:34:02.619 4.444 - 4.472: 98.2207% ( 11) 00:34:02.619 4.472 - 4.500: 98.2924% ( 10) 00:34:02.619 4.500 - 4.528: 98.3355% ( 6) 00:34:02.619 4.528 - 4.555: 98.3714% ( 5) 00:34:02.619 4.555 - 4.583: 98.4503% ( 11) 00:34:02.619 4.583 - 4.611: 98.4790% ( 4) 00:34:02.619 4.611 - 4.639: 98.5436% ( 9) 00:34:02.619 4.639 - 4.667: 98.5938% ( 7) 00:34:02.619 4.667 - 4.695: 98.6583% ( 9) 00:34:02.619 4.695 - 4.723: 98.7014% ( 6) 00:34:02.619 4.723 - 4.751: 98.7444% ( 6) 00:34:02.619 4.751 - 4.779: 98.7947% ( 7) 00:34:02.619 4.779 - 4.807: 98.8162% ( 3) 00:34:02.619 4.807 - 4.835: 98.8377% ( 3) 00:34:02.619 4.835 - 4.863: 98.8592% ( 3) 00:34:02.619 4.863 - 4.891: 98.8951% ( 5) 00:34:02.619 4.891 - 4.919: 98.9166% ( 3) 00:34:02.619 4.919 - 4.947: 98.9382% ( 3) 00:34:02.619 4.947 - 4.975: 98.9669% ( 4) 00:34:02.619 4.975 - 5.003: 98.9884% ( 3) 00:34:02.619 5.003 - 5.031: 99.0027% ( 2) 00:34:02.619 5.031 - 5.059: 99.0171% ( 2) 00:34:02.619 5.059 - 5.086: 99.0314% ( 2) 00:34:02.619 5.114 - 5.142: 99.0458% ( 2) 00:34:02.619 5.142 - 5.170: 99.0529% ( 1) 00:34:02.619 5.170 - 5.198: 99.0673% ( 2) 00:34:02.619 5.226 - 5.254: 99.0745% ( 1) 00:34:02.619 5.254 - 5.282: 99.1032% ( 4) 00:34:02.619 5.282 - 5.310: 99.1175% ( 2) 00:34:02.619 5.310 - 5.338: 99.1319% ( 2) 00:34:02.619 5.338 - 5.366: 99.1462% ( 2) 00:34:02.619 5.366 - 5.394: 99.1606% ( 2) 00:34:02.619 5.450 - 5.478: 99.1893% ( 4) 00:34:02.619 5.478 - 5.506: 99.2036% ( 2) 00:34:02.619 5.590 - 5.617: 99.2108% ( 1) 00:34:02.619 5.645 - 5.673: 99.2251% ( 2) 00:34:02.619 5.701 - 5.729: 99.2323% ( 1) 00:34:02.619 5.785 - 5.813: 99.2395% ( 1) 00:34:02.619 5.953 - 5.981: 99.2467% ( 1) 00:34:02.619 6.148 - 6.176: 99.2538% ( 1) 00:34:02.619 6.260 - 6.288: 99.2610% ( 1) 00:34:02.619 6.400 - 6.428: 99.2682% ( 1) 00:34:02.619 6.596 - 6.624: 99.2754% ( 1) 00:34:02.619 6.624 - 6.652: 99.2825% ( 1) 00:34:02.619 7.658 - 7.714: 99.2897% ( 1) 00:34:02.619 7.993 - 8.049: 99.2969% ( 1) 00:34:02.619 8.328 - 8.384: 99.3041% ( 1) 00:34:02.619 8.552 - 8.608: 99.3184% ( 2) 00:34:02.619 8.776 - 8.831: 99.3256% ( 1) 00:34:02.619 8.943 - 8.999: 99.3328% ( 1) 00:34:02.619 8.999 - 9.055: 99.3471% ( 2) 00:34:02.619 9.055 - 9.111: 99.3543% ( 1) 00:34:02.619 
9.111 - 9.167: 99.3830% ( 4) 00:34:02.619 9.167 - 9.223: 99.4045% ( 3) 00:34:02.619 9.223 - 9.279: 99.4117% ( 1) 00:34:02.619 9.334 - 9.390: 99.4260% ( 2) 00:34:02.619 9.390 - 9.446: 99.4619% ( 5) 00:34:02.619 9.446 - 9.502: 99.4763% ( 2) 00:34:02.619 9.502 - 9.558: 99.4906% ( 2) 00:34:02.619 9.558 - 9.614: 99.5265% ( 5) 00:34:02.619 9.614 - 9.670: 99.5336% ( 1) 00:34:02.619 9.670 - 9.726: 99.5408% ( 1) 00:34:02.619 9.782 - 9.838: 99.5480% ( 1) 00:34:02.620 9.893 - 9.949: 99.5552% ( 1) 00:34:02.620 9.949 - 10.005: 99.5695% ( 2) 00:34:02.620 10.005 - 10.061: 99.5767% ( 1) 00:34:02.620 10.061 - 10.117: 99.5839% ( 1) 00:34:02.620 10.117 - 10.173: 99.6197% ( 5) 00:34:02.620 10.173 - 10.229: 99.6628% ( 6) 00:34:02.620 10.229 - 10.285: 99.6700% ( 1) 00:34:02.620 10.285 - 10.341: 99.6915% ( 3) 00:34:02.620 10.341 - 10.397: 99.6987% ( 1) 00:34:02.620 10.508 - 10.564: 99.7058% ( 1) 00:34:02.620 10.564 - 10.620: 99.7130% ( 1) 00:34:02.620 10.620 - 10.676: 99.7202% ( 1) 00:34:02.620 10.900 - 10.955: 99.7274% ( 1) 00:34:02.620 10.955 - 11.011: 99.7345% ( 1) 00:34:02.620 11.011 - 11.067: 99.7417% ( 1) 00:34:02.620 11.179 - 11.235: 99.7489% ( 1) 00:34:02.620 11.235 - 11.291: 99.7561% ( 1) 00:34:02.620 11.459 - 11.514: 99.7704% ( 2) 00:34:02.620 11.794 - 11.850: 99.7776% ( 1) 00:34:02.620 11.906 - 11.962: 99.7919% ( 2) 00:34:02.620 12.129 - 12.185: 99.7991% ( 1) 00:34:02.620 12.409 - 12.465: 99.8063% ( 1) 00:34:02.620 12.521 - 12.576: 99.8135% ( 1) 00:34:02.620 12.744 - 12.800: 99.8206% ( 1) 00:34:02.620 13.079 - 13.135: 99.8278% ( 1) 00:34:02.620 13.583 - 13.638: 99.8350% ( 1) 00:34:02.620 14.533 - 14.645: 99.8422% ( 1) 00:34:02.620 18.445 - 18.557: 99.8493% ( 1) 00:34:02.620 19.899 - 20.010: 99.8565% ( 1) 00:34:02.620 21.799 - 21.911: 99.8637% ( 1) 00:34:02.620 4006.568 - 4035.186: 99.9857% ( 17) 00:34:02.620 4035.186 - 4063.804: 100.0000% ( 2) 00:34:02.620 00:34:02.620 Complete histogram 00:34:02.620 ================== 00:34:02.620 Range in us Cumulative Count 00:34:02.620 2.124 - 2.138: 0.0359% ( 5) 00:34:02.620 2.138 - 2.152: 3.0133% ( 415) 00:34:02.620 2.152 - 2.166: 31.1451% ( 3921) 00:34:02.620 2.166 - 2.180: 69.2926% ( 5317) 00:34:02.620 2.180 - 2.194: 80.6141% ( 1578) 00:34:02.620 2.194 - 2.208: 84.9333% ( 602) 00:34:02.620 2.208 - 2.222: 88.3341% ( 474) 00:34:02.620 2.222 - 2.236: 91.1393% ( 391) 00:34:02.620 2.236 - 2.250: 93.9518% ( 392) 00:34:02.620 2.250 - 2.264: 95.5804% ( 227) 00:34:02.620 2.264 - 2.278: 96.2333% ( 91) 00:34:02.620 2.278 - 2.292: 96.5490% ( 44) 00:34:02.620 2.292 - 2.306: 96.7858% ( 33) 00:34:02.620 2.306 - 2.320: 96.9221% ( 19) 00:34:02.620 2.320 - 2.334: 97.0010% ( 11) 00:34:02.620 2.334 - 2.348: 97.0728% ( 10) 00:34:02.620 2.348 - 2.362: 97.1014% ( 4) 00:34:02.620 2.362 - 2.376: 97.1947% ( 13) 00:34:02.620 2.376 - 2.390: 97.2378% ( 6) 00:34:02.620 2.390 - 2.403: 97.3310% ( 13) 00:34:02.620 2.403 - 2.417: 97.3956% ( 9) 00:34:02.620 2.417 - 2.431: 97.4458% ( 7) 00:34:02.620 2.431 - 2.445: 97.6037% ( 22) 00:34:02.620 2.445 - 2.459: 97.6611% ( 8) 00:34:02.620 2.459 - 2.473: 97.7472% ( 12) 00:34:02.620 2.473 - 2.487: 97.8189% ( 10) 00:34:02.620 2.487 - 2.501: 97.9481% ( 18) 00:34:02.620 2.501 - 2.515: 98.0628% ( 16) 00:34:02.620 2.515 - 2.529: 98.1418% ( 11) 00:34:02.620 2.529 - 2.543: 98.1992% ( 8) 00:34:02.620 2.543 - 2.557: 98.2494% ( 7) 00:34:02.620 2.557 - 2.571: 98.2709% ( 3) 00:34:02.620 2.571 - 2.585: 98.2924% ( 3) 00:34:02.620 2.585 - 2.599: 98.3427% ( 7) 00:34:02.620 2.599 - 2.613: 98.3857% ( 6) 00:34:02.620 2.613 - 2.627: 98.4001% ( 2) 00:34:02.620 2.627 - 
2.641: 98.4503% ( 7) 00:34:02.620 2.641 - 2.655: 98.4862% ( 5) 00:34:02.620 2.655 - 2.669: 98.5149% ( 4) 00:34:02.620 2.669 - 2.683: 98.5220% ( 1) 00:34:02.620 2.683 - 2.697: 98.5436% ( 3) 00:34:02.620 2.697 - 2.711: 98.5794% ( 5) 00:34:02.620 2.711 - 2.725: 98.6153% ( 5) 00:34:02.620 2.725 - 2.739: 98.6225% ( 1) 00:34:02.620 2.739 - 2.753: 98.6440% ( 3) 00:34:02.620 2.753 - 2.767: 98.6799% ( 5) 00:34:02.620 2.767 - 2.781: 98.6942% ( 2) 00:34:02.620 2.781 - 2.795: 98.7229% ( 4) 00:34:02.620 2.795 - 2.809: 98.7373% ( 2) 00:34:02.620 2.823 - 2.837: 98.7516% ( 2) 00:34:02.620 2.837 - 2.851: 98.7660% ( 2) 00:34:02.620 2.851 - 2.865: 98.8018% ( 5) 00:34:02.620 2.865 - 2.879: 98.8234% ( 3) 00:34:02.620 2.893 - 2.907: 98.8377% ( 2) 00:34:02.620 2.907 - 2.921: 98.8521% ( 2) 00:34:02.620 2.934 - 2.948: 98.8736% ( 3) 00:34:02.620 2.976 - 2.990: 98.8808% ( 1) 00:34:02.620 3.004 - 3.018: 98.8879% ( 1) 00:34:02.620 3.032 - 3.046: 98.8951% ( 1) 00:34:02.620 3.046 - 3.060: 98.9023% ( 1) 00:34:02.620 3.060 - 3.074: 98.9095% ( 1) 00:34:02.620 3.088 - 3.102: 98.9238% ( 2) 00:34:02.620 3.116 - 3.130: 98.9310% ( 1) 00:34:02.620 3.130 - 3.144: 98.9597% ( 4) 00:34:02.620 3.158 - 3.172: 98.9740% ( 2) 00:34:02.620 3.186 - 3.200: 98.9812% ( 1) 00:34:02.620 3.200 - 3.214: 98.9884% ( 1) 00:34:02.620 3.214 - 3.228: 99.0243% ( 5) 00:34:02.620 3.228 - 3.242: 99.0314% ( 1) 00:34:02.620 3.256 - 3.270: 99.0386% ( 1) 00:34:02.620 3.298 - 3.312: 99.0458% ( 1) 00:34:02.620 3.312 - 3.326: 99.0601% ( 2) 00:34:02.620 3.424 - 3.438: 99.0673% ( 1) 00:34:02.620 3.452 - 3.466: 99.0816% ( 2) 00:34:02.620 3.466 - 3.479: 99.0888% ( 1) 00:34:02.620 3.479 - 3.493: 99.0960% ( 1) 00:34:02.620 3.605 - 3.633: 99.1032% ( 1) 00:34:02.620 3.689 - 3.717: 99.1103% ( 1) 00:34:02.620 3.773 - 3.801: 99.1247% ( 2) 00:34:02.620 3.857 - 3.885: 99.1319% ( 1) 00:34:02.620 4.220 - 4.248: 99.1390% ( 1) 00:34:02.620 5.059 - 5.086: 99.1462% ( 1) 00:34:02.620 5.673 - 5.701: 99.1534% ( 1) 00:34:02.620 6.652 - 6.679: 99.1606% ( 1) 00:34:02.620 6.679 - 6.707: 99.1677% ( 1) 00:34:02.620 7.071 - 7.099: 99.1749% ( 1) 00:34:02.620 7.210 - 7.266: 99.1821% ( 1) 00:34:02.620 7.266 - 7.322: 99.1893% ( 1) 00:34:02.620 7.322 - 7.378: 99.1964% ( 1) 00:34:02.620 7.378 - 7.434: 99.2036% ( 1) 00:34:02.620 7.434 - 7.490: 99.2108% ( 1) 00:34:02.620 7.546 - 7.602: 99.2180% ( 1) 00:34:02.620 7.602 - 7.658: 99.2251% ( 1) 00:34:02.620 7.658 - 7.714: 99.2323% ( 1) 00:34:02.620 7.714 - 7.769: 99.2467% ( 2) 00:34:02.620 7.993 - 8.049: 99.2538% ( 1) 00:34:02.620 8.049 - 8.105: 99.2682% ( 2) 00:34:02.620 8.105 - 8.161: 99.2825% ( 2) 00:34:02.620 8.161 - 8.217: 99.2897% ( 1) 00:34:02.620 8.217 - 8.272: 99.3041% ( 2) 00:34:02.620 8.328 - 8.384: 99.3112% ( 1) 00:34:02.620 8.887 - 8.943: 99.3184% ( 1) 00:34:02.620 8.999 - 9.055: 99.3256% ( 1) 00:34:02.620 9.111 - 9.167: 99.3399% ( 2) 00:34:02.620 9.390 - 9.446: 99.3543% ( 2) 00:34:02.620 9.614 - 9.670: 99.3615% ( 1) 00:34:02.620 9.782 - 9.838: 99.3686% ( 1) 00:34:02.620 9.838 - 9.893: 99.3830% ( 2) 00:34:02.620 10.061 - 10.117: 99.3902% ( 1) 00:34:02.620 10.508 - 10.564: 99.4045% ( 2) 00:34:02.620 10.900 - 10.955: 99.4117% ( 1) 00:34:02.620 11.179 - 11.235: 99.4260% ( 2) 00:34:02.620 11.347 - 11.403: 99.4332% ( 1) 00:34:02.620 11.626 - 11.682: 99.4404% ( 1) 00:34:02.620 12.073 - 12.129: 99.4476% ( 1) 00:34:02.620 15.539 - 15.651: 99.4547% ( 1) 00:34:02.620 17.998 - 18.110: 99.4619% ( 1) 00:34:02.620 20.234 - 20.346: 99.4691% ( 1) 00:34:02.620 21.240 - 21.352: 99.4763% ( 1) 00:34:02.620 27.500 - 27.612: 99.4834% ( 1) 00:34:02.620 3863.476 - 
3892.094: 99.4906% ( 1) 00:34:02.620 4006.568 - 4035.186: 100.0000% ( 71) 00:34:02.620 00:34:02.620 08:30:35 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:34:02.620 08:30:35 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:34:02.620 08:30:35 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:34:02.620 08:30:35 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:34:02.620 08:30:35 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:34:02.880 [2024-04-17 08:30:36.081635] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:34:02.880 [ 00:34:02.880 { 00:34:02.880 "allow_any_host": true, 00:34:02.880 "hosts": [], 00:34:02.880 "listen_addresses": [], 00:34:02.880 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:02.880 "subtype": "Discovery" 00:34:02.880 }, 00:34:02.880 { 00:34:02.880 "allow_any_host": true, 00:34:02.880 "hosts": [], 00:34:02.880 "listen_addresses": [ 00:34:02.880 { 00:34:02.880 "adrfam": "IPv4", 00:34:02.880 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:34:02.880 "transport": "VFIOUSER", 00:34:02.880 "trsvcid": "0", 00:34:02.880 "trtype": "VFIOUSER" 00:34:02.880 } 00:34:02.880 ], 00:34:02.880 "max_cntlid": 65519, 00:34:02.880 "max_namespaces": 32, 00:34:02.880 "min_cntlid": 1, 00:34:02.880 "model_number": "SPDK bdev Controller", 00:34:02.880 "namespaces": [ 00:34:02.880 { 00:34:02.880 "bdev_name": "Malloc1", 00:34:02.880 "name": "Malloc1", 00:34:02.881 "nguid": "E959E8613921494B804DEE41C5FB4A7F", 00:34:02.881 "nsid": 1, 00:34:02.881 "uuid": "e959e861-3921-494b-804d-ee41c5fb4a7f" 00:34:02.881 } 00:34:02.881 ], 00:34:02.881 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:34:02.881 "serial_number": "SPDK1", 00:34:02.881 "subtype": "NVMe" 00:34:02.881 }, 00:34:02.881 { 00:34:02.881 "allow_any_host": true, 00:34:02.881 "hosts": [], 00:34:02.881 "listen_addresses": [ 00:34:02.881 { 00:34:02.881 "adrfam": "IPv4", 00:34:02.881 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:34:02.881 "transport": "VFIOUSER", 00:34:02.881 "trsvcid": "0", 00:34:02.881 "trtype": "VFIOUSER" 00:34:02.881 } 00:34:02.881 ], 00:34:02.881 "max_cntlid": 65519, 00:34:02.881 "max_namespaces": 32, 00:34:02.881 "min_cntlid": 1, 00:34:02.881 "model_number": "SPDK bdev Controller", 00:34:02.881 "namespaces": [ 00:34:02.881 { 00:34:02.881 "bdev_name": "Malloc2", 00:34:02.881 "name": "Malloc2", 00:34:02.881 "nguid": "E756C70D3FD14AD594EBE398F3627155", 00:34:02.881 "nsid": 1, 00:34:02.881 "uuid": "e756c70d-3fd1-4ad5-94eb-e398f3627155" 00:34:02.881 } 00:34:02.881 ], 00:34:02.881 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:34:02.881 "serial_number": "SPDK2", 00:34:02.881 "subtype": "NVMe" 00:34:02.881 } 00:34:02.881 ] 00:34:02.881 08:30:36 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:34:02.881 08:30:36 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:34:02.881 08:30:36 -- target/nvmf_vfio_user.sh@34 -- # aerpid=69647 00:34:02.881 08:30:36 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:34:02.881 08:30:36 -- common/autotest_common.sh@1244 -- # local i=0 00:34:02.881 08:30:36 
-- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:34:02.881 08:30:36 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:34:02.881 08:30:36 -- common/autotest_common.sh@1247 -- # i=1 00:34:02.881 08:30:36 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:34:03.140 08:30:36 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:34:03.140 08:30:36 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:34:03.140 08:30:36 -- common/autotest_common.sh@1247 -- # i=2 00:34:03.140 08:30:36 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:34:03.141 08:30:36 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:34:03.141 08:30:36 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:34:03.141 08:30:36 -- common/autotest_common.sh@1255 -- # return 0 00:34:03.141 08:30:36 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:34:03.141 08:30:36 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:34:03.399 Malloc3 00:34:03.399 08:30:36 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:34:03.658 08:30:36 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:34:03.658 Asynchronous Event Request test 00:34:03.658 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:34:03.658 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:34:03.658 Registering asynchronous event callbacks... 00:34:03.658 Starting namespace attribute notice tests for all controllers... 00:34:03.658 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:34:03.658 aer_cb - Changed Namespace 00:34:03.658 Cleaning up... 
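The nvmf_get_subsystems listing that follows confirms the new Malloc3 bdev is now attached to cnode1 as nsid 2 alongside Malloc1. A minimal sketch of pulling that state out programmatically, the way a follow-up check might (the rpc.py path is the one used throughout this run; the loop itself is illustrative and not part of the test):

    import json
    import subprocess

    # Ask the running nvmf target for its subsystems and list each namespace.
    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    subsystems = json.loads(subprocess.check_output([RPC, "nvmf_get_subsystems"]))
    for subsys in subsystems:
        # The discovery subsystem carries no "namespaces" key.
        for ns in subsys.get("namespaces", []):
            print(subsys["nqn"], ns["nsid"], ns["name"], ns["uuid"])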
00:34:03.945 [ 00:34:03.945 { 00:34:03.945 "allow_any_host": true, 00:34:03.945 "hosts": [], 00:34:03.945 "listen_addresses": [], 00:34:03.945 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:03.945 "subtype": "Discovery" 00:34:03.945 }, 00:34:03.945 { 00:34:03.945 "allow_any_host": true, 00:34:03.945 "hosts": [], 00:34:03.945 "listen_addresses": [ 00:34:03.945 { 00:34:03.945 "adrfam": "IPv4", 00:34:03.945 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:34:03.945 "transport": "VFIOUSER", 00:34:03.945 "trsvcid": "0", 00:34:03.945 "trtype": "VFIOUSER" 00:34:03.945 } 00:34:03.945 ], 00:34:03.945 "max_cntlid": 65519, 00:34:03.945 "max_namespaces": 32, 00:34:03.945 "min_cntlid": 1, 00:34:03.945 "model_number": "SPDK bdev Controller", 00:34:03.945 "namespaces": [ 00:34:03.945 { 00:34:03.945 "bdev_name": "Malloc1", 00:34:03.945 "name": "Malloc1", 00:34:03.945 "nguid": "E959E8613921494B804DEE41C5FB4A7F", 00:34:03.945 "nsid": 1, 00:34:03.945 "uuid": "e959e861-3921-494b-804d-ee41c5fb4a7f" 00:34:03.945 }, 00:34:03.945 { 00:34:03.945 "bdev_name": "Malloc3", 00:34:03.945 "name": "Malloc3", 00:34:03.945 "nguid": "F050DFC520C7423FB7E77B8FE7C1B16B", 00:34:03.945 "nsid": 2, 00:34:03.945 "uuid": "f050dfc5-20c7-423f-b7e7-7b8fe7c1b16b" 00:34:03.945 } 00:34:03.945 ], 00:34:03.945 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:34:03.945 "serial_number": "SPDK1", 00:34:03.945 "subtype": "NVMe" 00:34:03.945 }, 00:34:03.945 { 00:34:03.945 "allow_any_host": true, 00:34:03.945 "hosts": [], 00:34:03.945 "listen_addresses": [ 00:34:03.945 { 00:34:03.945 "adrfam": "IPv4", 00:34:03.945 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:34:03.945 "transport": "VFIOUSER", 00:34:03.945 "trsvcid": "0", 00:34:03.945 "trtype": "VFIOUSER" 00:34:03.945 } 00:34:03.945 ], 00:34:03.945 "max_cntlid": 65519, 00:34:03.945 "max_namespaces": 32, 00:34:03.945 "min_cntlid": 1, 00:34:03.945 "model_number": "SPDK bdev Controller", 00:34:03.945 "namespaces": [ 00:34:03.945 { 00:34:03.945 "bdev_name": "Malloc2", 00:34:03.945 "name": "Malloc2", 00:34:03.945 "nguid": "E756C70D3FD14AD594EBE398F3627155", 00:34:03.945 "nsid": 1, 00:34:03.945 "uuid": "e756c70d-3fd1-4ad5-94eb-e398f3627155" 00:34:03.945 } 00:34:03.945 ], 00:34:03.945 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:34:03.945 "serial_number": "SPDK2", 00:34:03.945 "subtype": "NVMe" 00:34:03.945 } 00:34:03.945 ] 00:34:03.945 08:30:37 -- target/nvmf_vfio_user.sh@44 -- # wait 69647 00:34:03.945 08:30:37 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:34:03.945 08:30:37 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:34:03.945 08:30:37 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:34:03.945 08:30:37 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:34:03.945 [2024-04-17 08:30:37.213153] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
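The spdk_nvme_identify run starting here targets the second controller through the -r transport string above, a space-separated list of key:value pairs (trtype, traddr, subnqn). A tiny parser that mirrors just that surface format (illustrative only; SPDK's real parsing lives in spdk_nvme_transport_id_parse(), in C), with the EAL and vfio-user debug output continuing below:

    # Split a perf/identify-style -r transport string into its fields.
    def parse_trid(s):
        return dict(kv.split(":", 1) for kv in s.split())

    trid = parse_trid("trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 "
                      "subnqn:nqn.2019-07.io.spdk:cnode2")
    print(trid["trtype"], trid["traddr"], trid["subnqn"])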
00:34:03.945 [2024-04-17 08:30:37.213193] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69684 ] 00:34:04.206 [2024-04-17 08:30:37.344427] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:34:04.206 [2024-04-17 08:30:37.350617] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:34:04.206 [2024-04-17 08:30:37.350655] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fcbe1c10000 00:34:04.206 [2024-04-17 08:30:37.351614] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:34:04.206 [2024-04-17 08:30:37.352614] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:34:04.206 [2024-04-17 08:30:37.353621] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:34:04.206 [2024-04-17 08:30:37.354616] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:34:04.206 [2024-04-17 08:30:37.355629] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:34:04.206 [2024-04-17 08:30:37.356630] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:34:04.206 [2024-04-17 08:30:37.357643] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:34:04.206 [2024-04-17 08:30:37.358670] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:34:04.206 [2024-04-17 08:30:37.359650] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:34:04.206 [2024-04-17 08:30:37.359677] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fcbe1c05000 00:34:04.206 [2024-04-17 08:30:37.360808] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:34:04.206 [2024-04-17 08:30:37.375052] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:34:04.206 [2024-04-17 08:30:37.375103] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:34:04.206 [2024-04-17 08:30:37.380200] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:34:04.206 [2024-04-17 08:30:37.380261] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:34:04.206 [2024-04-17 08:30:37.380345] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:34:04.206 [2024-04-17 
08:30:37.380367] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:34:04.206 [2024-04-17 08:30:37.380372] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:34:04.206 [2024-04-17 08:30:37.381184] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:34:04.206 [2024-04-17 08:30:37.381203] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:34:04.206 [2024-04-17 08:30:37.381210] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:34:04.206 [2024-04-17 08:30:37.382186] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:34:04.206 [2024-04-17 08:30:37.382207] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:34:04.206 [2024-04-17 08:30:37.382214] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:34:04.206 [2024-04-17 08:30:37.383189] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:34:04.206 [2024-04-17 08:30:37.383204] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:34:04.206 [2024-04-17 08:30:37.384191] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:34:04.206 [2024-04-17 08:30:37.384207] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:34:04.206 [2024-04-17 08:30:37.384211] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:34:04.206 [2024-04-17 08:30:37.384217] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:34:04.206 [2024-04-17 08:30:37.384322] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:34:04.206 [2024-04-17 08:30:37.384333] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:34:04.206 [2024-04-17 08:30:37.384337] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:34:04.206 [2024-04-17 08:30:37.385198] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:34:04.206 [2024-04-17 08:30:37.386197] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:34:04.206 [2024-04-17 08:30:37.387201] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:34:04.206 [2024-04-17 08:30:37.388234] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:34:04.206 [2024-04-17 08:30:37.389210] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:34:04.206 [2024-04-17 08:30:37.389225] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:34:04.206 [2024-04-17 08:30:37.389229] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:34:04.206 [2024-04-17 08:30:37.389249] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:34:04.206 [2024-04-17 08:30:37.389258] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:34:04.206 [2024-04-17 08:30:37.389273] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:34:04.206 [2024-04-17 08:30:37.389278] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:34:04.206 [2024-04-17 08:30:37.389290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:34:04.206 [2024-04-17 08:30:37.397408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:34:04.206 [2024-04-17 08:30:37.397432] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:34:04.206 [2024-04-17 08:30:37.397442] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:34:04.206 [2024-04-17 08:30:37.397445] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:34:04.206 [2024-04-17 08:30:37.397449] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:34:04.206 [2024-04-17 08:30:37.397453] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:34:04.206 [2024-04-17 08:30:37.397456] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:34:04.206 [2024-04-17 08:30:37.397461] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:34:04.206 [2024-04-17 08:30:37.397470] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:34:04.206 [2024-04-17 08:30:37.397482] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:34:04.206 [2024-04-17 08:30:37.405405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:34:04.206 [2024-04-17 08:30:37.405425] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.206 [2024-04-17 08:30:37.405434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.206 [2024-04-17 08:30:37.405441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.206 [2024-04-17 08:30:37.405449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.206 [2024-04-17 08:30:37.405452] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:34:04.207 [2024-04-17 08:30:37.405463] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:34:04.207 [2024-04-17 08:30:37.405471] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:34:04.207 [2024-04-17 08:30:37.413405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:34:04.207 [2024-04-17 08:30:37.413420] nvme_ctrlr.c:2877:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:34:04.207 [2024-04-17 08:30:37.413425] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:34:04.207 [2024-04-17 08:30:37.413431] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:34:04.207 [2024-04-17 08:30:37.413439] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:34:04.207 [2024-04-17 08:30:37.413448] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:34:04.207 [2024-04-17 08:30:37.421405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:34:04.207 [2024-04-17 08:30:37.421470] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:34:04.207 [2024-04-17 08:30:37.421479] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:34:04.207 [2024-04-17 08:30:37.421486] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:34:04.207 [2024-04-17 08:30:37.421491] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:34:04.207 [2024-04-17 08:30:37.421498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:34:04.207 [2024-04-17 08:30:37.429405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:34:04.207 [2024-04-17 
08:30:37.429433] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:34:04.207 [2024-04-17 08:30:37.429444] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:34:04.207 [2024-04-17 08:30:37.429452] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:34:04.207 [2024-04-17 08:30:37.429458] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:34:04.207 [2024-04-17 08:30:37.429462] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:34:04.207 [2024-04-17 08:30:37.429468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:34:04.207 [2024-04-17 08:30:37.437405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:34:04.207 [2024-04-17 08:30:37.437431] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:34:04.207 [2024-04-17 08:30:37.437440] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:34:04.207 [2024-04-17 08:30:37.437447] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:34:04.207 [2024-04-17 08:30:37.437450] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:34:04.207 [2024-04-17 08:30:37.437457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:34:04.207 [2024-04-17 08:30:37.445405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:34:04.207 [2024-04-17 08:30:37.445423] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:34:04.207 [2024-04-17 08:30:37.445429] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:34:04.207 [2024-04-17 08:30:37.445442] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:34:04.207 [2024-04-17 08:30:37.445448] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:34:04.207 [2024-04-17 08:30:37.445452] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:34:04.207 [2024-04-17 08:30:37.445456] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:34:04.207 [2024-04-17 08:30:37.445460] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:34:04.207 [2024-04-17 08:30:37.445464] 
nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:34:04.207 [2024-04-17 08:30:37.445484] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:34:04.207 [2024-04-17 08:30:37.453403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:34:04.207 [2024-04-17 08:30:37.453425] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:34:04.207 [2024-04-17 08:30:37.461403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:34:04.207 [2024-04-17 08:30:37.461427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:34:04.207 [2024-04-17 08:30:37.469406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:34:04.207 [2024-04-17 08:30:37.469433] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:34:04.207 [2024-04-17 08:30:37.477405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:34:04.207 [2024-04-17 08:30:37.477438] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:34:04.207 [2024-04-17 08:30:37.477443] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:34:04.207 [2024-04-17 08:30:37.477447] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:34:04.207 [2024-04-17 08:30:37.477449] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:34:04.207 [2024-04-17 08:30:37.477455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:34:04.207 [2024-04-17 08:30:37.477462] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:34:04.207 [2024-04-17 08:30:37.477466] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:34:04.207 [2024-04-17 08:30:37.477471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:34:04.207 [2024-04-17 08:30:37.477477] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:34:04.207 [2024-04-17 08:30:37.477481] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:34:04.207 [2024-04-17 08:30:37.477486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:34:04.207 [2024-04-17 08:30:37.477493] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:34:04.207 [2024-04-17 08:30:37.477496] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:34:04.207 [2024-04-17 08:30:37.477501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:34:04.207 ===================================================== 00:34:04.207 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:34:04.207 ===================================================== 00:34:04.207 Controller Capabilities/Features 00:34:04.207 ================================ 00:34:04.207 Vendor ID: 4e58 00:34:04.207 Subsystem Vendor ID: 4e58 00:34:04.207 Serial Number: SPDK2 00:34:04.207 Model Number: SPDK bdev Controller 00:34:04.207 Firmware Version: 24.01.1 00:34:04.207 Recommended Arb Burst: 6 00:34:04.207 IEEE OUI Identifier: 8d 6b 50 00:34:04.207 Multi-path I/O 00:34:04.207 May have multiple subsystem ports: Yes 00:34:04.207 May have multiple controllers: Yes 00:34:04.207 Associated with SR-IOV VF: No 00:34:04.207 Max Data Transfer Size: 131072 00:34:04.207 Max Number of Namespaces: 32 00:34:04.207 Max Number of I/O Queues: 127 00:34:04.207 NVMe Specification Version (VS): 1.3 00:34:04.207 NVMe Specification Version (Identify): 1.3 00:34:04.207 Maximum Queue Entries: 256 00:34:04.207 Contiguous Queues Required: Yes 00:34:04.207 Arbitration Mechanisms Supported 00:34:04.207 Weighted Round Robin: Not Supported 00:34:04.207 Vendor Specific: Not Supported 00:34:04.207 Reset Timeout: 15000 ms 00:34:04.207 Doorbell Stride: 4 bytes 00:34:04.207 NVM Subsystem Reset: Not Supported 00:34:04.207 Command Sets Supported 00:34:04.207 NVM Command Set: Supported 00:34:04.207 Boot Partition: Not Supported 00:34:04.207 Memory Page Size Minimum: 4096 bytes 00:34:04.207 Memory Page Size Maximum: 4096 bytes 00:34:04.207 Persistent Memory Region: Not Supported 00:34:04.207 Optional Asynchronous Events Supported 00:34:04.207 Namespace Attribute Notices: Supported 00:34:04.207 Firmware Activation Notices: Not Supported 00:34:04.207 ANA Change Notices: Not Supported 00:34:04.207 PLE Aggregate Log Change Notices: Not Supported 00:34:04.207 LBA Status Info Alert Notices: Not Supported 00:34:04.207 EGE Aggregate Log Change Notices: Not Supported 00:34:04.207 Normal NVM Subsystem Shutdown event: Not Supported 00:34:04.207 Zone Descriptor Change Notices: Not Supported 00:34:04.207 Discovery Log Change Notices: Not Supported 00:34:04.207 Controller Attributes 00:34:04.207 128-bit Host Identifier: Supported 00:34:04.207 Non-Operational Permissive Mode: Not Supported 00:34:04.207 NVM Sets: Not Supported 00:34:04.207 Read Recovery Levels: Not Supported 00:34:04.208 Endurance Groups: Not Supported 00:34:04.208 Predictable Latency Mode: Not Supported 00:34:04.208 Traffic Based Keep ALive: Not Supported 00:34:04.208 Namespace Granularity: Not Supported 00:34:04.208 SQ Associations: Not Supported 00:34:04.208 UUID List: Not Supported 00:34:04.208 Multi-Domain Subsystem: Not Supported 00:34:04.208 Fixed Capacity Management: Not Supported 00:34:04.208 Variable Capacity Management: Not Supported 00:34:04.208 Delete Endurance Group: Not Supported 00:34:04.208 Delete NVM Set: Not Supported 00:34:04.208 Extended LBA Formats Supported: Not Supported 00:34:04.208 Flexible Data Placement Supported: Not Supported 00:34:04.208 00:34:04.208 Controller Memory Buffer Support 00:34:04.208 ================================ 00:34:04.208 Supported: No 00:34:04.208 00:34:04.208 Persistent Memory Region Support 00:34:04.208 ================================ 00:34:04.208 Supported: No 00:34:04.208 00:34:04.208 Admin Command Set Attributes 00:34:04.208 ============================ 00:34:04.208 Security 
Send/Receive: Not Supported 00:34:04.208 Format NVM: Not Supported 00:34:04.208 Firmware Activate/Download: Not Supported 00:34:04.208 Namespace Management: Not Supported 00:34:04.208 Device Self-Test: Not Supported 00:34:04.208 Directives: Not Supported 00:34:04.208 NVMe-MI: Not Supported 00:34:04.208 Virtualization Management: Not Supported 00:34:04.208 Doorbell Buffer Config: Not Supported 00:34:04.208 Get LBA Status Capability: Not Supported 00:34:04.208 Command & Feature Lockdown Capability: Not Supported 00:34:04.208 Abort Command Limit: 4 00:34:04.208 Async Event Request Limit: 4 00:34:04.208 Number of Firmware Slots: N/A 00:34:04.208 Firmware Slot 1 Read-Only: N/A 00:34:04.208 Firmware Activation Without Reset: N/A 00:34:04.208 Multiple Update Detection Support: N/A 00:34:04.208 Firmware Update Granularity: No Information Provided 00:34:04.208 Per-Namespace SMART Log: No 00:34:04.208 Asymmetric Namespace Access Log Page: Not Supported 00:34:04.208 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:34:04.208 Command Effects Log Page: Supported 00:34:04.208 Get Log Page Extended Data: Supported 00:34:04.208 Telemetry Log Pages: Not Supported 00:34:04.208 Persistent Event Log Pages: Not Supported 00:34:04.208 Supported Log Pages Log Page: May Support 00:34:04.208 Commands Supported & Effects Log Page: Not Supported 00:34:04.208 Feature Identifiers & Effects Log Page:May Support 00:34:04.208 NVMe-MI Commands & Effects Log Page: May Support 00:34:04.208 Data Area 4 for Telemetry Log: Not Supported 00:34:04.208 Error Log Page Entries Supported: 128 00:34:04.208 Keep Alive: Supported 00:34:04.208 Keep Alive Granularity: 10000 ms 00:34:04.208 00:34:04.208 NVM Command Set Attributes 00:34:04.208 ========================== 00:34:04.208 Submission Queue Entry Size 00:34:04.208 Max: 64 00:34:04.208 Min: 64 00:34:04.208 Completion Queue Entry Size 00:34:04.208 Max: 16 00:34:04.208 Min: 16 00:34:04.208 Number of Namespaces: 32 00:34:04.208 Compare Command: Supported 00:34:04.208 Write Uncorrectable Command: Not Supported 00:34:04.208 Dataset Management Command: Supported 00:34:04.208 Write Zeroes Command: Supported 00:34:04.208 Set Features Save Field: Not Supported 00:34:04.208 Reservations: Not Supported 00:34:04.208 Timestamp: Not Supported 00:34:04.208 Copy: Supported 00:34:04.208 Volatile Write Cache: Present 00:34:04.208 Atomic Write Unit (Normal): 1 00:34:04.208 Atomic Write Unit (PFail): 1 00:34:04.208 Atomic Compare & Write Unit: 1 00:34:04.208 Fused Compare & Write: Supported 00:34:04.208 Scatter-Gather List 00:34:04.208 SGL Command Set: Supported (Dword aligned) 00:34:04.208 SGL Keyed: Not Supported 00:34:04.208 SGL Bit Bucket Descriptor: Not Supported 00:34:04.208 SGL Metadata Pointer: Not Supported 00:34:04.208 Oversized SGL: Not Supported 00:34:04.208 SGL Metadata Address: Not Supported 00:34:04.208 SGL Offset: Not Supported 00:34:04.208 Transport SGL Data Block: Not Supported 00:34:04.208 Replay Protected Memory Block: Not Supported 00:34:04.208 00:34:04.208 Firmware Slot Information 00:34:04.208 ========================= 00:34:04.208 Active slot: 1 00:34:04.208 Slot 1 Firmware Revision: 24.01.1 00:34:04.208 00:34:04.208 00:34:04.208 Commands Supported and Effects 00:34:04.208 ============================== 00:34:04.208 Admin Commands 00:34:04.208 -------------- 00:34:04.208 Get Log Page (02h): Supported 00:34:04.208 Identify (06h): Supported 00:34:04.208 Abort (08h): Supported 00:34:04.208 Set Features (09h): Supported 00:34:04.208 Get Features (0Ah): Supported 00:34:04.208 
Asynchronous Event Request (0Ch): Supported 00:34:04.208 Keep Alive (18h): Supported 00:34:04.208 I/O Commands 00:34:04.208 ------------ 00:34:04.208 Flush (00h): Supported LBA-Change 00:34:04.208 Write (01h): Supported LBA-Change 00:34:04.208 Read (02h): Supported 00:34:04.208 Compare (05h): Supported 00:34:04.208 Write Zeroes (08h): Supported LBA-Change 00:34:04.208 Dataset Management (09h): Supported LBA-Change 00:34:04.208 Copy (19h): Supported LBA-Change 00:34:04.208 Unknown (79h): Supported LBA-Change 00:34:04.208 Unknown (7Ah): Supported 00:34:04.208 00:34:04.208 Error Log 00:34:04.208 ========= 00:34:04.208 00:34:04.208 Arbitration 00:34:04.208 =========== 00:34:04.208 Arbitration Burst: 1 00:34:04.208 00:34:04.208 Power Management 00:34:04.208 ================ 00:34:04.208 Number of Power States: 1 00:34:04.208 Current Power State: Power State #0 00:34:04.208 Power State #0: 00:34:04.208 Max Power: 0.00 W 00:34:04.208 Non-Operational State: Operational 00:34:04.208 Entry Latency: Not Reported 00:34:04.208 Exit Latency: Not Reported 00:34:04.208 Relative Read Throughput: 0 00:34:04.208 Relative Read Latency: 0 00:34:04.208 Relative Write Throughput: 0 00:34:04.208 Relative Write Latency: 0 00:34:04.208 Idle Power: Not Reported 00:34:04.208 Active Power: Not Reported 00:34:04.208 Non-Operational Permissive Mode: Not Supported 00:34:04.208 00:34:04.208 Health Information 00:34:04.208 ================== 00:34:04.208 Critical Warnings: 00:34:04.208 Available Spare Space: OK 00:34:04.208 Temperature: OK 00:34:04.208 Device Reliability: OK 00:34:04.208 Read Only: No 00:34:04.208 Volatile Memory Backup: OK 00:34:04.208 Current Temperature: 0 Kelvin[2024-04-17 08:30:37.485404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:34:04.208 [2024-04-17 08:30:37.485436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:34:04.208 [2024-04-17 08:30:37.485445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:34:04.208 [2024-04-17 08:30:37.485451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:34:04.208 [2024-04-17 08:30:37.485563] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:34:04.208 [2024-04-17 08:30:37.493403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:34:04.208 [2024-04-17 08:30:37.493451] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:34:04.208 [2024-04-17 08:30:37.493460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.208 [2024-04-17 08:30:37.493466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.208 [2024-04-17 08:30:37.493471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.208 [2024-04-17 08:30:37.493476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.208 [2024-04-17 08:30:37.493551] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:34:04.208 [2024-04-17 08:30:37.493564] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:34:04.208 [2024-04-17 08:30:37.494580] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:34:04.208 [2024-04-17 08:30:37.494595] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:34:04.208 [2024-04-17 08:30:37.495535] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:34:04.208 [2024-04-17 08:30:37.495553] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:34:04.208 [2024-04-17 08:30:37.495696] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:34:04.208 [2024-04-17 08:30:37.496925] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:34:04.467 (-273 Celsius) 00:34:04.467 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:34:04.467 Available Spare: 0% 00:34:04.467 Available Spare Threshold: 0% 00:34:04.467 Life Percentage Used: 0% 00:34:04.467 Data Units Read: 0 00:34:04.467 Data Units Written: 0 00:34:04.467 Host Read Commands: 0 00:34:04.467 Host Write Commands: 0 00:34:04.467 Controller Busy Time: 0 minutes 00:34:04.467 Power Cycles: 0 00:34:04.467 Power On Hours: 0 hours 00:34:04.467 Unsafe Shutdowns: 0 00:34:04.467 Unrecoverable Media Errors: 0 00:34:04.467 Lifetime Error Log Entries: 0 00:34:04.467 Warning Temperature Time: 0 minutes 00:34:04.467 Critical Temperature Time: 0 minutes 00:34:04.467 00:34:04.467 Number of Queues 00:34:04.467 ================ 00:34:04.467 Number of I/O Submission Queues: 127 00:34:04.467 Number of I/O Completion Queues: 127 00:34:04.467 00:34:04.467 Active Namespaces 00:34:04.467 ================= 00:34:04.467 Namespace ID:1 00:34:04.467 Error Recovery Timeout: Unlimited 00:34:04.467 Command Set Identifier: NVM (00h) 00:34:04.467 Deallocate: Supported 00:34:04.467 Deallocated/Unwritten Error: Not Supported 00:34:04.467 Deallocated Read Value: Unknown 00:34:04.467 Deallocate in Write Zeroes: Not Supported 00:34:04.467 Deallocated Guard Field: 0xFFFF 00:34:04.467 Flush: Supported 00:34:04.467 Reservation: Supported 00:34:04.467 Namespace Sharing Capabilities: Multiple Controllers 00:34:04.467 Size (in LBAs): 131072 (0GiB) 00:34:04.467 Capacity (in LBAs): 131072 (0GiB) 00:34:04.467 Utilization (in LBAs): 131072 (0GiB) 00:34:04.467 NGUID: E756C70D3FD14AD594EBE398F3627155 00:34:04.467 UUID: e756c70d-3fd1-4ad5-94eb-e398f3627155 00:34:04.467 Thin Provisioning: Not Supported 00:34:04.467 Per-NS Atomic Units: Yes 00:34:04.467 Atomic Boundary Size (Normal): 0 00:34:04.467 Atomic Boundary Size (PFail): 0 00:34:04.467 Atomic Boundary Offset: 0 00:34:04.467 Maximum Single Source Range Length: 65535 00:34:04.467 Maximum Copy Length: 65535 00:34:04.467 Maximum Source Range Count: 1 00:34:04.467 NGUID/EUI64 Never Reused: No 00:34:04.467 Namespace Write Protected: No 00:34:04.467 Number of LBA Formats: 1 00:34:04.467 Current LBA Format: LBA Format #00 00:34:04.467 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:04.467 00:34:04.467 
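The controller and namespace dump above is SPDK's standard identify-style output, printed after attaching to the vfio-user controller at /var/run/vfio-user/domain/vfio-user2/2 (the shutdown/destruct *DEBUG* lines appear interleaved mid-dump because both streams share one log). A minimal sketch of reproducing such a dump by hand follows; the identify example binary and its build path are assumptions, not something this trace shows, while the -r transport ID string is copied from the commands the test script does run:

    # Hypothetical standalone identify against the same vfio-user endpoint.
    # Assumes SPDK's identify example was built at this path in the repo.
    /home/vagrant/spdk_repo/spdk/build/examples/identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'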
08:30:37 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:34:09.738 Initializing NVMe Controllers 00:34:09.738 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:34:09.738 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:34:09.738 Initialization complete. Launching workers. 00:34:09.738 ======================================================== 00:34:09.738 Latency(us) 00:34:09.738 Device Information : IOPS MiB/s Average min max 00:34:09.738 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36417.77 142.26 3514.22 1181.24 9631.41 00:34:09.738 ======================================================== 00:34:09.738 Total : 36417.77 142.26 3514.22 1181.24 9631.41 00:34:09.738 00:34:09.738 08:30:42 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:34:15.012 Initializing NVMe Controllers 00:34:15.012 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:34:15.013 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:34:15.013 Initialization complete. Launching workers. 00:34:15.013 ======================================================== 00:34:15.013 Latency(us) 00:34:15.013 Device Information : IOPS MiB/s Average min max 00:34:15.013 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35869.31 140.11 3569.17 1173.58 8907.60 00:34:15.013 ======================================================== 00:34:15.013 Total : 35869.31 140.11 3569.17 1173.58 8907.60 00:34:15.013 00:34:15.013 08:30:48 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:34:20.274 Initializing NVMe Controllers 00:34:20.275 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:34:20.275 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:34:20.275 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:34:20.275 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:34:20.275 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:34:20.275 Initialization complete. Launching workers. 
00:34:20.275 Starting thread on core 2 00:34:20.275 Starting thread on core 3 00:34:20.275 Starting thread on core 1 00:34:20.533 08:30:53 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:34:23.818 Initializing NVMe Controllers 00:34:23.818 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:34:23.818 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:34:23.818 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:34:23.818 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:34:23.818 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:34:23.818 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:34:23.818 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:34:23.818 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:34:23.818 Initialization complete. Launching workers. 00:34:23.818 Starting thread on core 1 with urgent priority queue 00:34:23.818 Starting thread on core 2 with urgent priority queue 00:34:23.818 Starting thread on core 3 with urgent priority queue 00:34:23.818 Starting thread on core 0 with urgent priority queue 00:34:23.818 SPDK bdev Controller (SPDK2 ) core 0: 5680.33 IO/s 17.60 secs/100000 ios 00:34:23.818 SPDK bdev Controller (SPDK2 ) core 1: 6047.33 IO/s 16.54 secs/100000 ios 00:34:23.818 SPDK bdev Controller (SPDK2 ) core 2: 6203.00 IO/s 16.12 secs/100000 ios 00:34:23.818 SPDK bdev Controller (SPDK2 ) core 3: 5857.33 IO/s 17.07 secs/100000 ios 00:34:23.818 ======================================================== 00:34:23.818 00:34:23.818 08:30:56 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:34:24.077 Initializing NVMe Controllers 00:34:24.077 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:34:24.077 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:34:24.077 Namespace ID: 1 size: 0GB 00:34:24.077 Initialization complete. 00:34:24.077 INFO: using host memory buffer for IO 00:34:24.077 Hello world! 00:34:24.077 08:30:57 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:34:25.463 Initializing NVMe Controllers 00:34:25.463 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:34:25.463 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:34:25.463 Initialization complete. Launching workers. 
00:34:25.463 submit (in ns) avg, min, max = 8318.2, 3099.6, 4029269.0 00:34:25.463 complete (in ns) avg, min, max = 32048.0, 1690.8, 4033888.2 00:34:25.463 00:34:25.463 Submit histogram 00:34:25.463 ================ 00:34:25.463 Range in us Cumulative Count 00:34:25.463 3.088 - 3.102: 0.0101% ( 1) 00:34:25.463 3.102 - 3.116: 0.0405% ( 3) 00:34:25.463 3.116 - 3.130: 0.0608% ( 2) 00:34:25.463 3.130 - 3.144: 0.0911% ( 3) 00:34:25.463 3.144 - 3.158: 0.1114% ( 2) 00:34:25.463 3.158 - 3.172: 0.1215% ( 1) 00:34:25.463 3.186 - 3.200: 0.1519% ( 3) 00:34:25.463 3.200 - 3.214: 0.1722% ( 2) 00:34:25.463 3.214 - 3.228: 0.1823% ( 1) 00:34:25.463 3.228 - 3.242: 0.3140% ( 13) 00:34:25.463 3.242 - 3.256: 0.8001% ( 48) 00:34:25.463 3.256 - 3.270: 1.8128% ( 100) 00:34:25.463 3.270 - 3.284: 3.0990% ( 127) 00:34:25.463 3.284 - 3.298: 4.6790% ( 156) 00:34:25.463 3.298 - 3.312: 6.0664% ( 137) 00:34:25.463 3.312 - 3.326: 7.4337% ( 135) 00:34:25.463 3.326 - 3.340: 8.4262% ( 98) 00:34:25.463 3.340 - 3.354: 8.9427% ( 51) 00:34:25.463 3.354 - 3.368: 9.2769% ( 33) 00:34:25.464 3.368 - 3.382: 9.6010% ( 32) 00:34:25.464 3.382 - 3.396: 9.7833% ( 18) 00:34:25.464 3.396 - 3.410: 9.8947% ( 11) 00:34:25.464 3.410 - 3.424: 10.0263% ( 13) 00:34:25.464 3.424 - 3.438: 10.1175% ( 9) 00:34:25.464 3.438 - 3.452: 10.1479% ( 3) 00:34:25.464 3.452 - 3.466: 10.1884% ( 4) 00:34:25.464 3.466 - 3.479: 10.2896% ( 10) 00:34:25.464 3.479 - 3.493: 10.3403% ( 5) 00:34:25.464 3.493 - 3.507: 10.4011% ( 6) 00:34:25.464 3.507 - 3.521: 10.4618% ( 6) 00:34:25.464 3.521 - 3.535: 10.5530% ( 9) 00:34:25.464 3.535 - 3.549: 10.7150% ( 16) 00:34:25.464 3.549 - 3.563: 11.0391% ( 32) 00:34:25.464 3.563 - 3.577: 11.5151% ( 47) 00:34:25.464 3.577 - 3.605: 14.4420% ( 289) 00:34:25.464 3.605 - 3.633: 20.0628% ( 555) 00:34:25.464 3.633 - 3.661: 25.8760% ( 574) 00:34:25.464 3.661 - 3.689: 30.3930% ( 446) 00:34:25.464 3.689 - 3.717: 32.3881% ( 197) 00:34:25.464 3.717 - 3.745: 33.5224% ( 112) 00:34:25.464 3.745 - 3.773: 34.0288% ( 50) 00:34:25.464 3.773 - 3.801: 37.2696% ( 320) 00:34:25.464 3.801 - 3.829: 46.1110% ( 873) 00:34:25.464 3.829 - 3.857: 58.0818% ( 1182) 00:34:25.464 3.857 - 3.885: 68.7361% ( 1052) 00:34:25.464 3.885 - 3.913: 75.7039% ( 688) 00:34:25.464 3.913 - 3.941: 79.8967% ( 414) 00:34:25.464 3.941 - 3.969: 82.5704% ( 264) 00:34:25.464 3.969 - 3.997: 84.4845% ( 189) 00:34:25.464 3.997 - 4.024: 85.4973% ( 100) 00:34:25.464 4.024 - 4.052: 86.2163% ( 71) 00:34:25.464 4.052 - 4.080: 87.0873% ( 86) 00:34:25.464 4.080 - 4.108: 89.1736% ( 206) 00:34:25.464 4.108 - 4.136: 91.7257% ( 252) 00:34:25.464 4.136 - 4.164: 94.1665% ( 241) 00:34:25.464 4.164 - 4.192: 96.3439% ( 215) 00:34:25.464 4.192 - 4.220: 97.6808% ( 132) 00:34:25.464 4.220 - 4.248: 98.3593% ( 67) 00:34:25.464 4.248 - 4.276: 98.7037% ( 34) 00:34:25.464 4.276 - 4.304: 98.8758% ( 17) 00:34:25.464 4.304 - 4.332: 98.9467% ( 7) 00:34:25.464 4.332 - 4.360: 98.9670% ( 2) 00:34:25.464 4.360 - 4.388: 99.0379% ( 7) 00:34:25.464 4.416 - 4.444: 99.0581% ( 2) 00:34:25.464 4.444 - 4.472: 99.0885% ( 3) 00:34:25.464 4.472 - 4.500: 99.1088% ( 2) 00:34:25.464 4.500 - 4.528: 99.1290% ( 2) 00:34:25.464 4.555 - 4.583: 99.1392% ( 1) 00:34:25.464 4.583 - 4.611: 99.1695% ( 3) 00:34:25.464 4.667 - 4.695: 99.1898% ( 2) 00:34:25.464 4.723 - 4.751: 99.2100% ( 2) 00:34:25.464 4.751 - 4.779: 99.2303% ( 2) 00:34:25.464 4.835 - 4.863: 99.2404% ( 1) 00:34:25.464 4.891 - 4.919: 99.2506% ( 1) 00:34:25.464 5.031 - 5.059: 99.2607% ( 1) 00:34:25.464 5.170 - 5.198: 99.2708% ( 1) 00:34:25.464 5.254 - 5.282: 99.2809% ( 1) 00:34:25.464 
7.602 - 7.658: 99.2911% ( 1) 00:34:25.464 7.993 - 8.049: 99.3012% ( 1) 00:34:25.464 8.217 - 8.272: 99.3113% ( 1) 00:34:25.464 8.440 - 8.496: 99.3316% ( 2) 00:34:25.464 8.496 - 8.552: 99.3417% ( 1) 00:34:25.464 8.664 - 8.720: 99.3518% ( 1) 00:34:25.464 8.720 - 8.776: 99.3822% ( 3) 00:34:25.464 8.943 - 8.999: 99.3923% ( 1) 00:34:25.464 8.999 - 9.055: 99.4025% ( 1) 00:34:25.464 9.167 - 9.223: 99.4126% ( 1) 00:34:25.464 9.223 - 9.279: 99.4329% ( 2) 00:34:25.464 9.279 - 9.334: 99.4632% ( 3) 00:34:25.464 9.334 - 9.390: 99.4936% ( 3) 00:34:25.464 9.390 - 9.446: 99.5037% ( 1) 00:34:25.464 9.446 - 9.502: 99.5139% ( 1) 00:34:25.464 9.558 - 9.614: 99.5240% ( 1) 00:34:25.464 9.670 - 9.726: 99.5341% ( 1) 00:34:25.464 9.782 - 9.838: 99.5443% ( 1) 00:34:25.464 9.838 - 9.893: 99.5544% ( 1) 00:34:25.464 9.893 - 9.949: 99.5645% ( 1) 00:34:25.464 10.005 - 10.061: 99.5746% ( 1) 00:34:25.464 10.061 - 10.117: 99.5949% ( 2) 00:34:25.464 10.117 - 10.173: 99.6050% ( 1) 00:34:25.464 10.285 - 10.341: 99.6152% ( 1) 00:34:25.464 10.341 - 10.397: 99.6455% ( 3) 00:34:25.464 10.397 - 10.452: 99.6557% ( 1) 00:34:25.464 10.676 - 10.732: 99.6658% ( 1) 00:34:25.464 10.788 - 10.844: 99.6759% ( 1) 00:34:25.464 10.844 - 10.900: 99.6860% ( 1) 00:34:25.464 10.955 - 11.011: 99.6962% ( 1) 00:34:25.464 11.403 - 11.459: 99.7063% ( 1) 00:34:25.464 12.521 - 12.576: 99.7164% ( 1) 00:34:25.464 14.533 - 14.645: 99.7266% ( 1) 00:34:25.464 14.756 - 14.868: 99.7367% ( 1) 00:34:25.464 14.868 - 14.980: 99.7671% ( 3) 00:34:25.464 14.980 - 15.092: 99.7772% ( 1) 00:34:25.464 15.427 - 15.539: 99.7974% ( 2) 00:34:25.464 17.216 - 17.328: 99.8076% ( 1) 00:34:25.464 18.781 - 18.893: 99.8177% ( 1) 00:34:25.464 19.340 - 19.452: 99.8380% ( 2) 00:34:25.464 19.675 - 19.787: 99.8481% ( 1) 00:34:25.464 19.899 - 20.010: 99.8582% ( 1) 00:34:25.464 20.010 - 20.122: 99.8683% ( 1) 00:34:25.464 20.234 - 20.346: 99.8785% ( 1) 00:34:25.464 28.842 - 29.066: 99.8886% ( 1) 00:34:25.464 3977.949 - 4006.568: 99.8987% ( 1) 00:34:25.464 4006.568 - 4035.186: 100.0000% ( 10) 00:34:25.464 00:34:25.464 Complete histogram 00:34:25.464 ================== 00:34:25.464 Range in us Cumulative Count 00:34:25.464 1.691 - 1.698: 0.0101% ( 1) 00:34:25.464 1.712 - 1.719: 0.0203% ( 1) 00:34:25.464 1.719 - 1.726: 0.0405% ( 2) 00:34:25.464 1.726 - 1.733: 0.1215% ( 8) 00:34:25.464 1.733 - 1.740: 0.1823% ( 6) 00:34:25.464 1.740 - 1.747: 0.2228% ( 4) 00:34:25.464 1.747 - 1.754: 0.2329% ( 1) 00:34:25.464 1.768 - 1.775: 0.2633% ( 3) 00:34:25.464 1.775 - 1.782: 0.4152% ( 15) 00:34:25.464 1.782 - 1.789: 0.7697% ( 35) 00:34:25.464 1.789 - 1.803: 1.2153% ( 44) 00:34:25.464 1.803 - 1.817: 1.5799% ( 36) 00:34:25.464 1.817 - 1.831: 4.1118% ( 250) 00:34:25.464 1.831 - 1.845: 9.8238% ( 564) 00:34:25.464 1.845 - 1.859: 11.3632% ( 152) 00:34:25.464 1.859 - 1.872: 12.2645% ( 89) 00:34:25.464 1.872 - 1.886: 12.5177% ( 25) 00:34:25.464 1.886 - 1.900: 12.7101% ( 19) 00:34:25.464 1.900 - 1.914: 12.8317% ( 12) 00:34:25.464 1.914 - 1.928: 12.9026% ( 7) 00:34:25.464 1.928 - 1.942: 12.9127% ( 1) 00:34:25.464 1.956 - 1.970: 12.9431% ( 3) 00:34:25.464 1.998 - 2.012: 13.0747% ( 13) 00:34:25.464 2.012 - 2.026: 14.5129% ( 142) 00:34:25.464 2.026 - 2.040: 16.0624% ( 153) 00:34:25.464 2.040 - 2.054: 16.7713% ( 70) 00:34:25.464 2.054 - 2.068: 22.6048% ( 576) 00:34:25.464 2.068 - 2.082: 34.0085% ( 1126) 00:34:25.464 2.082 - 2.096: 37.7152% ( 366) 00:34:25.464 2.096 - 2.110: 39.6698% ( 193) 00:34:25.464 2.110 - 2.124: 40.3788% ( 70) 00:34:25.464 2.124 - 2.138: 40.6421% ( 26) 00:34:25.464 2.138 - 2.152: 41.0674% ( 42) 
00:34:25.464 2.152 - 2.166: 43.4171% ( 232) 00:34:25.464 2.166 - 2.180: 46.6984% ( 324) 00:34:25.464 2.180 - 2.194: 48.2074% ( 149) 00:34:25.464 2.194 - 2.208: 55.2866% ( 699) 00:34:25.464 2.208 - 2.222: 77.0103% ( 2145) 00:34:25.464 2.222 - 2.236: 89.1837% ( 1202) 00:34:25.464 2.236 - 2.250: 93.0120% ( 378) 00:34:25.464 2.250 - 2.264: 95.1995% ( 216) 00:34:25.464 2.264 - 2.278: 96.1414% ( 93) 00:34:25.464 2.278 - 2.292: 97.0326% ( 88) 00:34:25.464 2.292 - 2.306: 97.8327% ( 79) 00:34:25.464 2.306 - 2.320: 98.0960% ( 26) 00:34:25.464 2.320 - 2.334: 98.1872% ( 9) 00:34:25.464 2.334 - 2.348: 98.2277% ( 4) 00:34:25.464 2.348 - 2.362: 98.2783% ( 5) 00:34:25.464 2.362 - 2.376: 98.2986% ( 2) 00:34:25.464 2.376 - 2.390: 98.3188% ( 2) 00:34:25.464 2.390 - 2.403: 98.3492% ( 3) 00:34:25.464 2.403 - 2.417: 98.3796% ( 3) 00:34:25.464 2.417 - 2.431: 98.3897% ( 1) 00:34:25.464 2.431 - 2.445: 98.4201% ( 3) 00:34:25.464 2.445 - 2.459: 98.4403% ( 2) 00:34:25.464 2.459 - 2.473: 98.4707% ( 3) 00:34:25.464 2.473 - 2.487: 98.5112% ( 4) 00:34:25.464 2.501 - 2.515: 98.5315% ( 2) 00:34:25.464 2.515 - 2.529: 98.5720% ( 4) 00:34:25.464 2.529 - 2.543: 98.6125% ( 4) 00:34:25.464 2.543 - 2.557: 98.6328% ( 2) 00:34:25.464 2.557 - 2.571: 98.6530% ( 2) 00:34:25.464 2.571 - 2.585: 98.6632% ( 1) 00:34:25.464 2.613 - 2.627: 98.6733% ( 1) 00:34:25.464 2.627 - 2.641: 98.6834% ( 1) 00:34:25.464 2.641 - 2.655: 98.7442% ( 6) 00:34:25.464 2.669 - 2.683: 98.7543% ( 1) 00:34:25.464 2.697 - 2.711: 98.7948% ( 4) 00:34:25.464 2.711 - 2.725: 98.8049% ( 1) 00:34:25.464 2.725 - 2.739: 98.8151% ( 1) 00:34:25.464 2.753 - 2.767: 98.8353% ( 2) 00:34:25.464 2.795 - 2.809: 98.8455% ( 1) 00:34:25.464 2.851 - 2.865: 98.8556% ( 1) 00:34:25.464 2.865 - 2.879: 98.8657% ( 1) 00:34:25.464 2.879 - 2.893: 98.8758% ( 1) 00:34:25.464 2.921 - 2.934: 98.8860% ( 1) 00:34:25.464 2.934 - 2.948: 98.8961% ( 1) 00:34:25.464 2.962 - 2.976: 98.9062% ( 1) 00:34:25.464 2.990 - 3.004: 98.9163% ( 1) 00:34:25.464 3.046 - 3.060: 98.9265% ( 1) 00:34:25.464 3.116 - 3.130: 98.9366% ( 1) 00:34:25.464 3.242 - 3.256: 98.9467% ( 1) 00:34:25.464 3.479 - 3.493: 98.9569% ( 1) 00:34:25.464 3.507 - 3.521: 98.9670% ( 1) 00:34:25.465 3.563 - 3.577: 98.9771% ( 1) 00:34:25.465 3.745 - 3.773: 98.9872% ( 1) 00:34:25.465 4.416 - 4.444: 98.9974% ( 1) 00:34:25.465 4.555 - 4.583: 99.0075% ( 1) 00:34:25.465 6.428 - 6.456: 99.0176% ( 1) 00:34:25.465 7.210 - 7.266: 99.0277% ( 1) 00:34:25.465 7.378 - 7.434: 99.0379% ( 1) 00:34:25.465 7.434 - 7.490: 99.0581% ( 2) 00:34:25.465 7.937 - 7.993: 99.0784% ( 2) 00:34:25.465 8.105 - 8.161: 99.0885% ( 1) 00:34:25.465 8.161 - 8.217: 99.0986% ( 1) 00:34:25.465 8.384 - 8.440: 99.1088% ( 1) 00:34:25.465 8.608 - 8.664: 99.1189% ( 1) 00:34:25.465 8.664 - 8.720: 99.1290% ( 1) 00:34:25.465 8.887 - 8.943: 99.1392% ( 1) 00:34:25.465 9.167 - 9.223: 99.1493% ( 1) 00:34:25.465 9.334 - 9.390: 99.1594% ( 1) 00:34:25.465 9.558 - 9.614: 99.1695% ( 1) 00:34:25.465 10.788 - 10.844: 99.1898% ( 2) 00:34:25.465 13.024 - 13.079: 99.1999% ( 1) 00:34:25.465 13.583 - 13.638: 99.2100% ( 1) 00:34:25.465 18.222 - 18.334: 99.2202% ( 1) 00:34:25.465 18.557 - 18.669: 99.2303% ( 1) 00:34:25.465 29.513 - 29.736: 99.2404% ( 1) 00:34:25.465 1008.797 - 1015.951: 99.2506% ( 1) 00:34:25.465 3019.235 - 3033.544: 99.2607% ( 1) 00:34:25.465 3033.544 - 3047.853: 99.2708% ( 1) 00:34:25.465 3062.162 - 3076.472: 99.2809% ( 1) 00:34:25.465 3949.331 - 3977.949: 99.2911% ( 1) 00:34:25.465 3977.949 - 4006.568: 99.3620% ( 7) 00:34:25.465 4006.568 - 4035.186: 100.0000% ( 63) 00:34:25.465 00:34:25.465 
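The two perf tables and the overhead histograms above come from tools that all take the same -r transport ID. In the histograms, each "Range in us" bucket shows the cumulative percentage of I/Os with the per-bucket count in parentheses; the 3977.949-4035.186 us buckets hold the ~4 ms outliers that set the submit/complete max values reported at the top. For reference, this is the read-workload invocation from the trace above, copied verbatim from this run:

    # As run by target/nvmf_vfio_user.sh@84 above: queue depth 128, 4 KiB
    # I/O, 5 s run, worker pinned to core 1 (-c 0x2), 256 MB of memory
    # reserved for the app (-s 256), plus -g as passed by the script.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2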
08:30:58 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:34:25.465 08:30:58 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:34:25.465 08:30:58 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:34:25.465 08:30:58 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:34:25.465 08:30:58 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:34:25.724 [ 00:34:25.724 { 00:34:25.724 "allow_any_host": true, 00:34:25.724 "hosts": [], 00:34:25.724 "listen_addresses": [], 00:34:25.724 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:25.724 "subtype": "Discovery" 00:34:25.724 }, 00:34:25.724 { 00:34:25.724 "allow_any_host": true, 00:34:25.724 "hosts": [], 00:34:25.724 "listen_addresses": [ 00:34:25.724 { 00:34:25.724 "adrfam": "IPv4", 00:34:25.724 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:34:25.724 "transport": "VFIOUSER", 00:34:25.724 "trsvcid": "0", 00:34:25.725 "trtype": "VFIOUSER" 00:34:25.725 } 00:34:25.725 ], 00:34:25.725 "max_cntlid": 65519, 00:34:25.725 "max_namespaces": 32, 00:34:25.725 "min_cntlid": 1, 00:34:25.725 "model_number": "SPDK bdev Controller", 00:34:25.725 "namespaces": [ 00:34:25.725 { 00:34:25.725 "bdev_name": "Malloc1", 00:34:25.725 "name": "Malloc1", 00:34:25.725 "nguid": "E959E8613921494B804DEE41C5FB4A7F", 00:34:25.725 "nsid": 1, 00:34:25.725 "uuid": "e959e861-3921-494b-804d-ee41c5fb4a7f" 00:34:25.725 }, 00:34:25.725 { 00:34:25.725 "bdev_name": "Malloc3", 00:34:25.725 "name": "Malloc3", 00:34:25.725 "nguid": "F050DFC520C7423FB7E77B8FE7C1B16B", 00:34:25.725 "nsid": 2, 00:34:25.725 "uuid": "f050dfc5-20c7-423f-b7e7-7b8fe7c1b16b" 00:34:25.725 } 00:34:25.725 ], 00:34:25.725 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:34:25.725 "serial_number": "SPDK1", 00:34:25.725 "subtype": "NVMe" 00:34:25.725 }, 00:34:25.725 { 00:34:25.725 "allow_any_host": true, 00:34:25.725 "hosts": [], 00:34:25.725 "listen_addresses": [ 00:34:25.725 { 00:34:25.725 "adrfam": "IPv4", 00:34:25.725 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:34:25.725 "transport": "VFIOUSER", 00:34:25.725 "trsvcid": "0", 00:34:25.725 "trtype": "VFIOUSER" 00:34:25.725 } 00:34:25.725 ], 00:34:25.725 "max_cntlid": 65519, 00:34:25.725 "max_namespaces": 32, 00:34:25.725 "min_cntlid": 1, 00:34:25.725 "model_number": "SPDK bdev Controller", 00:34:25.725 "namespaces": [ 00:34:25.725 { 00:34:25.725 "bdev_name": "Malloc2", 00:34:25.725 "name": "Malloc2", 00:34:25.725 "nguid": "E756C70D3FD14AD594EBE398F3627155", 00:34:25.725 "nsid": 1, 00:34:25.725 "uuid": "e756c70d-3fd1-4ad5-94eb-e398f3627155" 00:34:25.725 } 00:34:25.725 ], 00:34:25.725 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:34:25.725 "serial_number": "SPDK2", 00:34:25.725 "subtype": "NVMe" 00:34:25.725 } 00:34:25.725 ] 00:34:25.725 08:30:58 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:34:25.725 08:30:58 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:34:25.725 08:30:58 -- target/nvmf_vfio_user.sh@34 -- # aerpid=69934 00:34:25.725 08:30:58 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:34:25.725 08:30:58 -- common/autotest_common.sh@1244 -- # local i=0 00:34:25.725 08:30:58 -- common/autotest_common.sh@1245 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:34:25.725 08:30:58 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:34:25.725 08:30:58 -- common/autotest_common.sh@1247 -- # i=1 00:34:25.725 08:30:58 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:34:25.725 08:30:59 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:34:25.725 08:30:59 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:34:25.725 08:30:59 -- common/autotest_common.sh@1247 -- # i=2 00:34:25.725 08:30:59 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:34:25.984 08:30:59 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:34:25.984 08:30:59 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:34:25.984 08:30:59 -- common/autotest_common.sh@1255 -- # return 0 00:34:25.984 08:30:59 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:34:25.984 08:30:59 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:34:26.243 Malloc4 00:34:26.243 08:30:59 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:34:26.243 08:30:59 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:34:26.503 Asynchronous Event Request test 00:34:26.504 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:34:26.504 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:34:26.504 Registering asynchronous event callbacks... 00:34:26.504 Starting namespace attribute notice tests for all controllers... 00:34:26.504 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:34:26.504 aer_cb - Changed Namespace 00:34:26.504 Cleaning up... 
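The "Asynchronous Event Request test" above has two halves: the aer tool arms an AER and waits, then the script hot-adds a second namespace, which makes the target raise a Namespace Attribute Changed event. A sketch of that sequence, with every command copied from the trace above (only the backgrounding and wait glue is added here):

    # 1) Arm an AER against the controller; the tool touches the file once
    #    it is ready (the waitforfile loop in the trace polls for it).
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -n 2 -g -t /tmp/aer_touch_file &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
    # 2) Hot-add a namespace; the target emits the AEN and the tool logs
    #    "aer_cb - Changed Namespace".
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    wait "$aerpid"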
00:34:26.504 [ 00:34:26.504 { 00:34:26.504 "allow_any_host": true, 00:34:26.504 "hosts": [], 00:34:26.504 "listen_addresses": [], 00:34:26.504 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:26.504 "subtype": "Discovery" 00:34:26.504 }, 00:34:26.504 { 00:34:26.504 "allow_any_host": true, 00:34:26.504 "hosts": [], 00:34:26.504 "listen_addresses": [ 00:34:26.504 { 00:34:26.504 "adrfam": "IPv4", 00:34:26.504 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:34:26.504 "transport": "VFIOUSER", 00:34:26.504 "trsvcid": "0", 00:34:26.504 "trtype": "VFIOUSER" 00:34:26.504 } 00:34:26.504 ], 00:34:26.504 "max_cntlid": 65519, 00:34:26.504 "max_namespaces": 32, 00:34:26.504 "min_cntlid": 1, 00:34:26.504 "model_number": "SPDK bdev Controller", 00:34:26.504 "namespaces": [ 00:34:26.504 { 00:34:26.504 "bdev_name": "Malloc1", 00:34:26.504 "name": "Malloc1", 00:34:26.504 "nguid": "E959E8613921494B804DEE41C5FB4A7F", 00:34:26.504 "nsid": 1, 00:34:26.504 "uuid": "e959e861-3921-494b-804d-ee41c5fb4a7f" 00:34:26.504 }, 00:34:26.504 { 00:34:26.504 "bdev_name": "Malloc3", 00:34:26.504 "name": "Malloc3", 00:34:26.504 "nguid": "F050DFC520C7423FB7E77B8FE7C1B16B", 00:34:26.504 "nsid": 2, 00:34:26.504 "uuid": "f050dfc5-20c7-423f-b7e7-7b8fe7c1b16b" 00:34:26.504 } 00:34:26.504 ], 00:34:26.504 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:34:26.504 "serial_number": "SPDK1", 00:34:26.504 "subtype": "NVMe" 00:34:26.504 }, 00:34:26.504 { 00:34:26.504 "allow_any_host": true, 00:34:26.504 "hosts": [], 00:34:26.504 "listen_addresses": [ 00:34:26.504 { 00:34:26.504 "adrfam": "IPv4", 00:34:26.504 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:34:26.504 "transport": "VFIOUSER", 00:34:26.504 "trsvcid": "0", 00:34:26.504 "trtype": "VFIOUSER" 00:34:26.504 } 00:34:26.504 ], 00:34:26.504 "max_cntlid": 65519, 00:34:26.504 "max_namespaces": 32, 00:34:26.504 "min_cntlid": 1, 00:34:26.504 "model_number": "SPDK bdev Controller", 00:34:26.504 "namespaces": [ 00:34:26.504 { 00:34:26.504 "bdev_name": "Malloc2", 00:34:26.504 "name": "Malloc2", 00:34:26.504 "nguid": "E756C70D3FD14AD594EBE398F3627155", 00:34:26.504 "nsid": 1, 00:34:26.504 "uuid": "e756c70d-3fd1-4ad5-94eb-e398f3627155" 00:34:26.504 }, 00:34:26.504 { 00:34:26.504 "bdev_name": "Malloc4", 00:34:26.504 "name": "Malloc4", 00:34:26.504 "nguid": "9CFC3DD527F84ABD997C416BFE8EAB92", 00:34:26.504 "nsid": 2, 00:34:26.504 "uuid": "9cfc3dd5-27f8-4abd-997c-416bfe8eab92" 00:34:26.504 } 00:34:26.504 ], 00:34:26.504 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:34:26.504 "serial_number": "SPDK2", 00:34:26.504 "subtype": "NVMe" 00:34:26.504 } 00:34:26.504 ] 00:34:26.504 08:30:59 -- target/nvmf_vfio_user.sh@44 -- # wait 69934 00:34:26.504 08:30:59 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:34:26.504 08:30:59 -- target/nvmf_vfio_user.sh@95 -- # killprocess 69260 00:34:26.504 08:30:59 -- common/autotest_common.sh@926 -- # '[' -z 69260 ']' 00:34:26.504 08:30:59 -- common/autotest_common.sh@930 -- # kill -0 69260 00:34:26.504 08:30:59 -- common/autotest_common.sh@931 -- # uname 00:34:26.504 08:30:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:26.504 08:30:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69260 00:34:26.504 08:30:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:26.504 08:30:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:26.504 08:30:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69260' 00:34:26.504 killing process with pid 69260 00:34:26.504 08:30:59 -- 
common/autotest_common.sh@945 -- # kill 69260 00:34:26.504 [2024-04-17 08:30:59.804189] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:34:26.504 08:30:59 -- common/autotest_common.sh@950 -- # wait 69260 00:34:27.082 08:31:00 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:34:27.082 08:31:00 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:34:27.082 08:31:00 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:34:27.082 08:31:00 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:34:27.082 08:31:00 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:34:27.082 08:31:00 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=69982 00:34:27.082 08:31:00 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:34:27.082 Process pid: 69982 00:34:27.082 08:31:00 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 69982' 00:34:27.082 08:31:00 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:34:27.082 08:31:00 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 69982 00:34:27.082 08:31:00 -- common/autotest_common.sh@819 -- # '[' -z 69982 ']' 00:34:27.082 08:31:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:27.082 08:31:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:27.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:27.082 08:31:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:27.082 08:31:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:27.082 08:31:00 -- common/autotest_common.sh@10 -- # set +x 00:34:27.082 [2024-04-17 08:31:00.171738] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:27.082 [2024-04-17 08:31:00.172777] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:27.082 [2024-04-17 08:31:00.172839] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:27.082 [2024-04-17 08:31:00.312869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:27.339 [2024-04-17 08:31:00.418226] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:27.339 [2024-04-17 08:31:00.418362] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:27.339 [2024-04-17 08:31:00.418371] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:27.339 [2024-04-17 08:31:00.418377] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:27.339 [2024-04-17 08:31:00.418605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:27.339 [2024-04-17 08:31:00.418875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.339 [2024-04-17 08:31:00.418744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:27.339 [2024-04-17 08:31:00.418900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:27.339 [2024-04-17 08:31:00.491077] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:34:27.339 [2024-04-17 08:31:00.498683] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:34:27.339 [2024-04-17 08:31:00.498943] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:34:27.339 [2024-04-17 08:31:00.499188] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:27.339 [2024-04-17 08:31:00.499324] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:34:28.057 08:31:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:28.057 08:31:01 -- common/autotest_common.sh@852 -- # return 0 00:34:28.057 08:31:01 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:34:28.990 08:31:02 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:34:29.248 08:31:02 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:34:29.248 08:31:02 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:34:29.248 08:31:02 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:34:29.248 08:31:02 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:34:29.248 08:31:02 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:34:29.536 Malloc1 00:34:29.536 08:31:02 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:34:29.801 08:31:02 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:34:29.801 08:31:03 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:34:30.059 08:31:03 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:34:30.059 08:31:03 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:34:30.059 08:31:03 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:34:30.318 Malloc2 00:34:30.318 08:31:03 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:34:30.576 08:31:03 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:34:30.834 08:31:03 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:34:30.834 
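The @64 through @74 trace above is the complete per-device vfio-user target setup for the interrupt-mode variant, consolidated below for reference. Commands are verbatim from the trace; -M -I are the extra transport_args this variant passes to nvmf_create_transport, and the cnode2/Malloc2 device repeats the same five steps under vfio-user2/2:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # one-time: register the VFIOUSER transport on the interrupt-mode target
    $rpc nvmf_create_transport -t VFIOUSER -M -I
    mkdir -p /var/run/vfio-user
    # per device: backing bdev, subsystem, namespace, vfio-user listener
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user1/1 -s 0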
08:31:04 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:34:30.834 08:31:04 -- target/nvmf_vfio_user.sh@95 -- # killprocess 69982 00:34:30.834 08:31:04 -- common/autotest_common.sh@926 -- # '[' -z 69982 ']' 00:34:30.834 08:31:04 -- common/autotest_common.sh@930 -- # kill -0 69982 00:34:30.834 08:31:04 -- common/autotest_common.sh@931 -- # uname 00:34:30.834 08:31:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:30.834 08:31:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69982 00:34:31.097 killing process with pid 69982 00:34:31.097 08:31:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:31.097 08:31:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:31.097 08:31:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69982' 00:34:31.097 08:31:04 -- common/autotest_common.sh@945 -- # kill 69982 00:34:31.097 08:31:04 -- common/autotest_common.sh@950 -- # wait 69982 00:34:31.367 08:31:04 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:34:31.367 08:31:04 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:34:31.367 00:34:31.367 real 0m54.017s 00:34:31.367 user 3m33.292s 00:34:31.367 sys 0m3.161s 00:34:31.367 08:31:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:31.367 08:31:04 -- common/autotest_common.sh@10 -- # set +x 00:34:31.367 ************************************ 00:34:31.367 END TEST nvmf_vfio_user 00:34:31.367 ************************************ 00:34:31.367 08:31:04 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:34:31.367 08:31:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:34:31.367 08:31:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:31.367 08:31:04 -- common/autotest_common.sh@10 -- # set +x 00:34:31.367 ************************************ 00:34:31.367 START TEST nvmf_vfio_user_nvme_compliance 00:34:31.367 ************************************ 00:34:31.367 08:31:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:34:31.367 * Looking for test storage... 
00:34:31.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance 00:34:31.367 08:31:04 -- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:31.367 08:31:04 -- nvmf/common.sh@7 -- # uname -s 00:34:31.367 08:31:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:31.367 08:31:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:31.367 08:31:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:31.367 08:31:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:31.367 08:31:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:31.367 08:31:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:31.367 08:31:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:31.367 08:31:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:31.367 08:31:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:31.367 08:31:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:31.367 08:31:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:34:31.367 08:31:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:34:31.367 08:31:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:31.367 08:31:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:31.367 08:31:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:31.367 08:31:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:31.367 08:31:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:31.367 08:31:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:31.367 08:31:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:31.367 08:31:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.367 08:31:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.367 08:31:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.367 08:31:04 -- 
paths/export.sh@5 -- # export PATH 00:34:31.367 08:31:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.367 08:31:04 -- nvmf/common.sh@46 -- # : 0 00:34:31.367 08:31:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:31.367 08:31:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:31.367 08:31:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:31.367 08:31:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:31.367 08:31:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:31.367 08:31:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:31.367 08:31:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:31.367 08:31:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:31.367 08:31:04 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:31.367 08:31:04 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:31.367 08:31:04 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:34:31.367 08:31:04 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:34:31.367 08:31:04 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:34:31.367 08:31:04 -- compliance/compliance.sh@20 -- # nvmfpid=70169 00:34:31.367 Process pid: 70169 00:34:31.367 08:31:04 -- compliance/compliance.sh@21 -- # echo 'Process pid: 70169' 00:34:31.367 08:31:04 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:34:31.367 08:31:04 -- compliance/compliance.sh@24 -- # waitforlisten 70169 00:34:31.367 08:31:04 -- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:34:31.367 08:31:04 -- common/autotest_common.sh@819 -- # '[' -z 70169 ']' 00:34:31.367 08:31:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:31.367 08:31:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:31.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:31.367 08:31:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:31.367 08:31:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:31.367 08:31:04 -- common/autotest_common.sh@10 -- # set +x 00:34:31.626 [2024-04-17 08:31:04.712909] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:34:31.626 [2024-04-17 08:31:04.712991] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:31.626 [2024-04-17 08:31:04.849209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:31.626 [2024-04-17 08:31:04.950882] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:31.626 [2024-04-17 08:31:04.951026] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:31.626 [2024-04-17 08:31:04.951035] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:31.626 [2024-04-17 08:31:04.951042] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:31.626 [2024-04-17 08:31:04.951218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:31.626 [2024-04-17 08:31:04.951379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:31.626 [2024-04-17 08:31:04.951379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:32.559 08:31:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:32.559 08:31:05 -- common/autotest_common.sh@852 -- # return 0 00:34:32.559 08:31:05 -- compliance/compliance.sh@26 -- # sleep 1 00:34:33.496 08:31:06 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:34:33.496 08:31:06 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:34:33.496 08:31:06 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:34:33.496 08:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:33.496 08:31:06 -- common/autotest_common.sh@10 -- # set +x 00:34:33.496 08:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:33.496 08:31:06 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:34:33.496 08:31:06 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:34:33.496 08:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:33.496 08:31:06 -- common/autotest_common.sh@10 -- # set +x 00:34:33.496 malloc0 00:34:33.496 08:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:33.496 08:31:06 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:34:33.496 08:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:33.496 08:31:06 -- common/autotest_common.sh@10 -- # set +x 00:34:33.496 08:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:33.496 08:31:06 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:34:33.496 08:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:33.496 08:31:06 -- common/autotest_common.sh@10 -- # set +x 00:34:33.496 08:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:33.496 08:31:06 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:34:33.496 08:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:33.496 08:31:06 -- common/autotest_common.sh@10 -- # set +x 00:34:33.496 08:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:33.496 08:31:06 -- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 
'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:34:33.496 00:34:33.496 00:34:33.496 CUnit - A unit testing framework for C - Version 2.1-3 00:34:33.496 http://cunit.sourceforge.net/ 00:34:33.496 00:34:33.496 00:34:33.496 Suite: nvme_compliance 00:34:33.755 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-17 08:31:06.873180] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:34:33.755 [2024-04-17 08:31:06.873239] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:34:33.755 [2024-04-17 08:31:06.873247] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:34:33.755 passed 00:34:33.755 Test: admin_identify_ctrlr_verify_fused ...passed 00:34:34.038 Test: admin_identify_ns ...[2024-04-17 08:31:07.115423] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:34:34.038 [2024-04-17 08:31:07.123423] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:34:34.038 passed 00:34:34.038 Test: admin_get_features_mandatory_features ...passed 00:34:34.038 Test: admin_get_features_optional_features ...passed 00:34:34.318 Test: admin_set_features_number_of_queues ...passed 00:34:34.318 Test: admin_get_log_page_mandatory_logs ...passed 00:34:34.577 Test: admin_get_log_page_with_lpo ...[2024-04-17 08:31:07.754414] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:34:34.577 passed 00:34:34.577 Test: fabric_property_get ...passed 00:34:34.834 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-17 08:31:07.945469] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:34:34.834 passed 00:34:34.835 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-17 08:31:08.127416] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:34:34.835 [2024-04-17 08:31:08.143412] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:34:35.093 passed 00:34:35.093 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-17 08:31:08.234929] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:34:35.093 passed 00:34:35.093 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-17 08:31:08.399412] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:34:35.093 [2024-04-17 08:31:08.423422] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:34:35.352 passed 00:34:35.352 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-17 08:31:08.515813] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:34:35.352 [2024-04-17 08:31:08.515885] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:34:35.352 passed 00:34:35.610 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-17 08:31:08.694415] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:34:35.610 [2024-04-17 08:31:08.702409] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:34:35.610 [2024-04-17 08:31:08.710411] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:34:35.610 [2024-04-17 08:31:08.718409] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:34:35.610 passed 
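For reference, the rpc_cmd/xtrace sequence that prepared this suite (compliance.sh@19-@40 above) reduces to the following standalone steps; a minimal sketch, assuming rpc_cmd resolves to scripts/rpc.py talking to the default /var/tmp/spdk.sock, as it does in this harness:

  # launch the target as compliance.sh@19 does: shm id 0, all tracepoint groups, cores 0-2
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &

  # vfio-user transport plus a 64 MiB malloc bdev with 512-byte blocks
  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0

  # subsystem: any host allowed (-a), serial "spdk", up to 32 namespaces (-m 32)
  scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0

  # a vfio-user listener is a directory of socket files rather than an IP:port pair
  mkdir -p /var/run/vfio-user
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

  # run the suite; the vfio_user.c *ERROR* lines surrounding the passed tests are
  # the negative cases the suite provokes on purpose
  test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'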
00:34:35.610 Test: admin_create_io_sq_verify_pc ...[2024-04-17 08:31:08.850438] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:34:35.610 passed 00:34:37.031 Test: admin_create_io_qp_max_qps ...[2024-04-17 08:31:10.055409] nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:34:37.290 passed 00:34:37.549 Test: admin_create_io_sq_shared_cq ...[2024-04-17 08:31:10.653412] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:34:37.549 passed 00:34:37.549 00:34:37.549 Run Summary: Type Total Ran Passed Failed Inactive 00:34:37.549 suites 1 1 n/a 0 0 00:34:37.549 tests 18 18 18 0 0 00:34:37.549 asserts 360 360 360 0 n/a 00:34:37.549 00:34:37.549 Elapsed time = 1.590 seconds 00:34:37.549 08:31:10 -- compliance/compliance.sh@42 -- # killprocess 70169 00:34:37.549 08:31:10 -- common/autotest_common.sh@926 -- # '[' -z 70169 ']' 00:34:37.549 08:31:10 -- common/autotest_common.sh@930 -- # kill -0 70169 00:34:37.549 08:31:10 -- common/autotest_common.sh@931 -- # uname 00:34:37.549 08:31:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:37.549 08:31:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70169 00:34:37.549 killing process with pid 70169 00:34:37.549 08:31:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:37.549 08:31:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:37.549 08:31:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70169' 00:34:37.549 08:31:10 -- common/autotest_common.sh@945 -- # kill 70169 00:34:37.549 08:31:10 -- common/autotest_common.sh@950 -- # wait 70169 00:34:37.809 08:31:11 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:34:37.809 08:31:11 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:34:37.809 00:34:37.809 real 0m6.509s 00:34:37.809 user 0m18.245s 00:34:37.809 sys 0m0.447s 00:34:37.809 08:31:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:37.809 08:31:11 -- common/autotest_common.sh@10 -- # set +x 00:34:37.809 ************************************ 00:34:37.809 END TEST nvmf_vfio_user_nvme_compliance 00:34:37.809 ************************************ 00:34:37.809 08:31:11 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:34:37.809 08:31:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:34:37.809 08:31:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:37.809 08:31:11 -- common/autotest_common.sh@10 -- # set +x 00:34:37.809 ************************************ 00:34:37.809 START TEST nvmf_vfio_user_fuzz 00:34:37.809 ************************************ 00:34:37.809 08:31:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:34:38.068 * Looking for test storage... 
00:34:38.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:38.068 08:31:11 -- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:38.068 08:31:11 -- nvmf/common.sh@7 -- # uname -s 00:34:38.069 08:31:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:38.069 08:31:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:38.069 08:31:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:38.069 08:31:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:38.069 08:31:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:38.069 08:31:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:38.069 08:31:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:38.069 08:31:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:38.069 08:31:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:38.069 08:31:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:38.069 08:31:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:34:38.069 08:31:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:34:38.069 08:31:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:38.069 08:31:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:38.069 08:31:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:38.069 08:31:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:38.069 08:31:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:38.069 08:31:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:38.069 08:31:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:38.069 08:31:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.069 08:31:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.069 08:31:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.069 08:31:11 -- 
paths/export.sh@5 -- # export PATH 00:34:38.069 08:31:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.069 08:31:11 -- nvmf/common.sh@46 -- # : 0 00:34:38.069 08:31:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:38.069 08:31:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:38.069 08:31:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:38.069 08:31:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:38.069 08:31:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:38.069 08:31:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:38.069 08:31:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:38.069 08:31:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:38.069 08:31:11 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:38.069 08:31:11 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:38.069 08:31:11 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:34:38.069 08:31:11 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:34:38.069 08:31:11 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:34:38.069 08:31:11 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:34:38.069 08:31:11 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:34:38.069 Process pid: 70321 00:34:38.069 08:31:11 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=70321 00:34:38.069 08:31:11 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 70321' 00:34:38.069 08:31:11 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:34:38.069 08:31:11 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 70321 00:34:38.069 08:31:11 -- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:34:38.069 08:31:11 -- common/autotest_common.sh@819 -- # '[' -z 70321 ']' 00:34:38.069 08:31:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:38.069 08:31:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:38.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:38.069 08:31:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
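nvmf/common.sh@17-@20 above mint a per-run host identity with nvme gen-hostnqn and stash it in the NVME_HOST array. Tests that drive the kernel initiator splice it in roughly as below; an illustrative sketch only, since none of the three runs in this excerpt calls nvme connect itself, and the address/subsystem values are borrowed from the host-management run further down:

  NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2
  NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

  # identify this initiator to the target when connecting over TCP
  nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0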
00:34:38.069 08:31:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:38.069 08:31:11 -- common/autotest_common.sh@10 -- # set +x 00:34:39.006 08:31:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:39.006 08:31:12 -- common/autotest_common.sh@852 -- # return 0 00:34:39.006 08:31:12 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:34:39.940 08:31:13 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:34:39.940 08:31:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:39.940 08:31:13 -- common/autotest_common.sh@10 -- # set +x 00:34:39.940 08:31:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:39.940 08:31:13 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:34:39.940 08:31:13 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:34:39.940 08:31:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:39.940 08:31:13 -- common/autotest_common.sh@10 -- # set +x 00:34:39.940 malloc0 00:34:39.940 08:31:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:39.940 08:31:13 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:34:39.940 08:31:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:39.940 08:31:13 -- common/autotest_common.sh@10 -- # set +x 00:34:40.197 08:31:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:40.197 08:31:13 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:34:40.197 08:31:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:40.197 08:31:13 -- common/autotest_common.sh@10 -- # set +x 00:34:40.197 08:31:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:40.197 08:31:13 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:34:40.197 08:31:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:40.197 08:31:13 -- common/autotest_common.sh@10 -- # set +x 00:34:40.197 08:31:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:40.197 08:31:13 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:34:40.198 08:31:13 -- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:34:40.456 Shutting down the fuzz application 00:34:40.456 08:31:13 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:34:40.456 08:31:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:40.456 08:31:13 -- common/autotest_common.sh@10 -- # set +x 00:34:40.456 08:31:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:40.456 08:31:13 -- target/vfio_user_fuzz.sh@46 -- # killprocess 70321 00:34:40.456 08:31:13 -- common/autotest_common.sh@926 -- # '[' -z 70321 ']' 00:34:40.456 08:31:13 -- common/autotest_common.sh@930 -- # kill -0 70321 00:34:40.456 08:31:13 -- common/autotest_common.sh@931 -- # uname 00:34:40.456 08:31:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:40.456 08:31:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70321 00:34:40.456 08:31:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:40.456 08:31:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 
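Both the compliance and fuzz passes tear their target down through the same autotest_common.sh helper whose xtrace appears above (kill -0 probe, uname check, ps comm lookup, kill, wait). Reconstructed here as a sketch, not the verbatim function:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 1                        # bail out if already gone
      local process_name=
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      if [ "$process_name" = sudo ]; then
          :   # the real helper resolves and signals sudo's child instead; the
              # branch is never taken here, since comm is reactor_0 for an SPDK app
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }

  killprocess 70321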
00:34:40.456 killing process with pid 70321 00:34:40.456 08:31:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70321' 00:34:40.456 08:31:13 -- common/autotest_common.sh@945 -- # kill 70321 00:34:40.456 08:31:13 -- common/autotest_common.sh@950 -- # wait 70321 00:34:40.715 08:31:13 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:34:40.715 08:31:13 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:34:40.715 00:34:40.715 real 0m2.912s 00:34:40.715 user 0m3.195s 00:34:40.715 sys 0m0.380s 00:34:40.715 08:31:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:40.715 08:31:13 -- common/autotest_common.sh@10 -- # set +x 00:34:40.715 ************************************ 00:34:40.715 END TEST nvmf_vfio_user_fuzz 00:34:40.715 ************************************ 00:34:40.715 08:31:14 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:34:40.715 08:31:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:34:40.715 08:31:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:40.715 08:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:40.715 ************************************ 00:34:40.715 START TEST nvmf_host_management 00:34:40.715 ************************************ 00:34:40.715 08:31:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:34:40.975 * Looking for test storage... 00:34:40.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:40.975 08:31:14 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:40.975 08:31:14 -- nvmf/common.sh@7 -- # uname -s 00:34:40.975 08:31:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:40.975 08:31:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:40.975 08:31:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:40.975 08:31:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:40.975 08:31:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:40.975 08:31:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:40.975 08:31:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:40.975 08:31:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:40.975 08:31:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:40.975 08:31:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:40.975 08:31:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:34:40.976 08:31:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:34:40.976 08:31:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:40.976 08:31:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:40.976 08:31:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:40.976 08:31:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:40.976 08:31:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:40.976 08:31:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:40.976 08:31:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:40.976 08:31:14 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:... [paths/export.sh@2-@6 here repeat the same PATH prepend/export/echo sequence already shown in full in the nvmf_vfio_user_fuzz trace above] 00:34:40.976 08:31:14 -- nvmf/common.sh@46 -- # : 0 00:34:40.976 08:31:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:40.976 08:31:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:40.976 08:31:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:40.976 08:31:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:40.976 08:31:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:40.976 08:31:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:40.976 08:31:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:40.976 08:31:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:40.976 08:31:14 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:40.976 08:31:14 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:40.976 08:31:14 -- target/host_management.sh@104 -- # nvmftestinit 00:34:40.976 08:31:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:40.976 08:31:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:40.976 08:31:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:40.976 08:31:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:40.976 08:31:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:40.976 08:31:14 -- nvmf/common.sh@616 -- #
xtrace_disable_per_cmd _remove_spdk_ns 00:34:40.976 08:31:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:40.976 08:31:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:40.976 08:31:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:34:40.976 08:31:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:34:40.976 08:31:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:34:40.976 08:31:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:34:40.976 08:31:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:34:40.976 08:31:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:34:40.976 08:31:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:40.976 08:31:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:40.976 08:31:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:34:40.976 08:31:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:34:40.976 08:31:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:40.976 08:31:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:40.976 08:31:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:40.976 08:31:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:40.976 08:31:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:40.976 08:31:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:40.976 08:31:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:40.976 08:31:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:40.976 08:31:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:34:40.976 08:31:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:34:40.976 Cannot find device "nvmf_tgt_br" 00:34:40.976 08:31:14 -- nvmf/common.sh@154 -- # true 00:34:40.976 08:31:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:34:40.976 Cannot find device "nvmf_tgt_br2" 00:34:40.976 08:31:14 -- nvmf/common.sh@155 -- # true 00:34:40.976 08:31:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:34:40.976 08:31:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:34:40.976 Cannot find device "nvmf_tgt_br" 00:34:40.976 08:31:14 -- nvmf/common.sh@157 -- # true 00:34:40.976 08:31:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:34:40.976 Cannot find device "nvmf_tgt_br2" 00:34:40.976 08:31:14 -- nvmf/common.sh@158 -- # true 00:34:40.976 08:31:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:34:40.976 08:31:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:34:40.976 08:31:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:40.976 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:40.976 08:31:14 -- nvmf/common.sh@161 -- # true 00:34:40.976 08:31:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:40.976 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:40.976 08:31:14 -- nvmf/common.sh@162 -- # true 00:34:40.976 08:31:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:34:40.976 08:31:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:40.976 08:31:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:40.976 08:31:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:34:40.976 08:31:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:41.235 08:31:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:41.235 08:31:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:41.235 08:31:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:34:41.235 08:31:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:34:41.235 08:31:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:34:41.235 08:31:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:34:41.235 08:31:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:34:41.235 08:31:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:34:41.235 08:31:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:41.235 08:31:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:41.235 08:31:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:41.235 08:31:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:34:41.235 08:31:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:34:41.235 08:31:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:34:41.235 08:31:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:41.235 08:31:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:41.235 08:31:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:41.235 08:31:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:41.235 08:31:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:34:41.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:41.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:34:41.235 00:34:41.235 --- 10.0.0.2 ping statistics --- 00:34:41.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.235 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:34:41.235 08:31:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:34:41.235 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:41.235 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:34:41.235 00:34:41.235 --- 10.0.0.3 ping statistics --- 00:34:41.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.235 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:34:41.235 08:31:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:41.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:41.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:34:41.235 00:34:41.235 --- 10.0.0.1 ping statistics --- 00:34:41.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.235 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:34:41.235 08:31:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:41.235 08:31:14 -- nvmf/common.sh@421 -- # return 0 00:34:41.235 08:31:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:34:41.235 08:31:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:41.235 08:31:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:41.235 08:31:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:41.235 08:31:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:41.235 08:31:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:41.235 08:31:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:41.235 08:31:14 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:34:41.235 08:31:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:41.235 08:31:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:41.235 08:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:41.235 ************************************ 00:34:41.235 START TEST nvmf_host_management 00:34:41.235 ************************************ 00:34:41.235 08:31:14 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:34:41.235 08:31:14 -- target/host_management.sh@69 -- # starttarget 00:34:41.235 08:31:14 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:34:41.235 08:31:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:41.235 08:31:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:41.235 08:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:41.235 08:31:14 -- nvmf/common.sh@469 -- # nvmfpid=70553 00:34:41.235 08:31:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:34:41.235 08:31:14 -- nvmf/common.sh@470 -- # waitforlisten 70553 00:34:41.235 08:31:14 -- common/autotest_common.sh@819 -- # '[' -z 70553 ']' 00:34:41.235 08:31:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:41.235 08:31:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:41.235 08:31:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:41.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:41.235 08:31:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:41.235 08:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:41.235 [2024-04-17 08:31:14.510057] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:41.235 [2024-04-17 08:31:14.510135] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:41.494 [2024-04-17 08:31:14.639516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:41.494 [2024-04-17 08:31:14.748799] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:41.494 [2024-04-17 08:31:14.748971] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
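nvmf_veth_init above (nvmf/common.sh@140-@206) first deletes any leftovers from a previous run, which is why the Cannot find device / Cannot open network namespace complaints are expected, then builds the topology the three pings just verified: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, the target's nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.2/10.0.0.3) move into nvmf_tgt_ns_spdk, and the three *_br veth peers hang off bridge nvmf_br. Condensed into a standalone sketch, with the command set taken directly from the trace:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2; ping -c 1 10.0.0.3              # root ns -> target ns
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns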
00:34:41.494 [2024-04-17 08:31:14.748979] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:41.494 [2024-04-17 08:31:14.748985] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:41.494 [2024-04-17 08:31:14.749192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:41.494 [2024-04-17 08:31:14.749251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:41.494 [2024-04-17 08:31:14.749331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:41.494 [2024-04-17 08:31:14.749332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:42.432 08:31:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:42.432 08:31:15 -- common/autotest_common.sh@852 -- # return 0 00:34:42.432 08:31:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:42.432 08:31:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:42.432 08:31:15 -- common/autotest_common.sh@10 -- # set +x 00:34:42.432 08:31:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:42.432 08:31:15 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:42.432 08:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:42.432 08:31:15 -- common/autotest_common.sh@10 -- # set +x 00:34:42.432 [2024-04-17 08:31:15.485773] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:42.432 08:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:42.432 08:31:15 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:34:42.432 08:31:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:42.432 08:31:15 -- common/autotest_common.sh@10 -- # set +x 00:34:42.432 08:31:15 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:34:42.432 08:31:15 -- target/host_management.sh@23 -- # cat 00:34:42.432 08:31:15 -- target/host_management.sh@30 -- # rpc_cmd 00:34:42.432 08:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:42.432 08:31:15 -- common/autotest_common.sh@10 -- # set +x 00:34:42.432 Malloc0 00:34:42.432 [2024-04-17 08:31:15.566079] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:42.432 08:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:42.432 08:31:15 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:34:42.432 08:31:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:42.432 08:31:15 -- common/autotest_common.sh@10 -- # set +x 00:34:42.432 08:31:15 -- target/host_management.sh@73 -- # perfpid=70625 00:34:42.432 08:31:15 -- target/host_management.sh@74 -- # waitforlisten 70625 /var/tmp/bdevperf.sock 00:34:42.432 08:31:15 -- common/autotest_common.sh@819 -- # '[' -z 70625 ']' 00:34:42.432 08:31:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:42.432 08:31:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:42.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:42.432 08:31:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
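The target half then comes up inside the namespace; its subsystem is configured through an rpcs.txt batch whose contents the trace does not echo, so everything past the transport line below is reconstructed from the Listening on 10.0.0.2 port 4420 notice, the NVMF_SERIAL value above, and the subnqn bdevperf uses next. Treat the bdev and subsystem names as assumptions:

  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

  # NVMF_TRANSPORT_OPTS='-t tcp -o' plus the 8192-byte io-unit size the test passes
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

  # reconstructed rpcs.txt batch
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420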
00:34:42.432 08:31:15 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:34:42.432 08:31:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:42.432 08:31:15 -- common/autotest_common.sh@10 -- # set +x 00:34:42.432 08:31:15 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:34:42.432 08:31:15 -- nvmf/common.sh@520 -- # config=() 00:34:42.432 08:31:15 -- nvmf/common.sh@520 -- # local subsystem config 00:34:42.432 08:31:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:42.432 08:31:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:42.432 { 00:34:42.432 "params": { 00:34:42.432 "name": "Nvme$subsystem", 00:34:42.432 "trtype": "$TEST_TRANSPORT", 00:34:42.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:42.432 "adrfam": "ipv4", 00:34:42.432 "trsvcid": "$NVMF_PORT", 00:34:42.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:42.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:42.432 "hdgst": ${hdgst:-false}, 00:34:42.432 "ddgst": ${ddgst:-false} 00:34:42.432 }, 00:34:42.432 "method": "bdev_nvme_attach_controller" 00:34:42.432 } 00:34:42.432 EOF 00:34:42.432 )") 00:34:42.432 08:31:15 -- nvmf/common.sh@542 -- # cat 00:34:42.432 08:31:15 -- nvmf/common.sh@544 -- # jq . 00:34:42.432 08:31:15 -- nvmf/common.sh@545 -- # IFS=, 00:34:42.432 08:31:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:42.432 "params": { 00:34:42.432 "name": "Nvme0", 00:34:42.432 "trtype": "tcp", 00:34:42.432 "traddr": "10.0.0.2", 00:34:42.432 "adrfam": "ipv4", 00:34:42.432 "trsvcid": "4420", 00:34:42.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:42.432 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:42.432 "hdgst": false, 00:34:42.432 "ddgst": false 00:34:42.432 }, 00:34:42.432 "method": "bdev_nvme_attach_controller" 00:34:42.432 }' 00:34:42.432 [2024-04-17 08:31:15.666602] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:42.432 [2024-04-17 08:31:15.666691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70625 ] 00:34:42.694 [2024-04-17 08:31:15.813717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:42.694 [2024-04-17 08:31:15.920972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:42.954 Running I/O for 10 seconds... 
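On the initiator side, gen_nvmf_target_json above assembles the bdev config in memory and feeds it to bdevperf through --json /dev/fd/63, so nothing is written to disk. The attach entry below is exactly what the trace's printf emitted (only the enclosing wrapper object produced by the helper is omitted), followed by the invocation it drives:

  {
    "params": {
      "name": "Nvme0",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode0",
      "hostnqn": "nqn.2016-06.io.spdk:host0",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

  # queue depth 64, 64 KiB I/Os, verify workload, 10 seconds
  build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10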
00:34:43.526 08:31:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:43.526 08:31:16 -- common/autotest_common.sh@852 -- # return 0 00:34:43.526 08:31:16 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:34:43.526 08:31:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:43.526 08:31:16 -- common/autotest_common.sh@10 -- # set +x 00:34:43.526 08:31:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:43.526 08:31:16 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:43.526 08:31:16 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:34:43.526 08:31:16 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:34:43.526 08:31:16 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:34:43.526 08:31:16 -- target/host_management.sh@52 -- # local ret=1 00:34:43.526 08:31:16 -- target/host_management.sh@53 -- # local i 00:34:43.526 08:31:16 -- target/host_management.sh@54 -- # (( i = 10 )) 00:34:43.526 08:31:16 -- target/host_management.sh@54 -- # (( i != 0 )) 00:34:43.526 08:31:16 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:34:43.526 08:31:16 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:34:43.526 08:31:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:43.526 08:31:16 -- common/autotest_common.sh@10 -- # set +x 00:34:43.526 08:31:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:43.526 08:31:16 -- target/host_management.sh@55 -- # read_io_count=2138 00:34:43.526 08:31:16 -- target/host_management.sh@58 -- # '[' 2138 -ge 100 ']' 00:34:43.526 08:31:16 -- target/host_management.sh@59 -- # ret=0 00:34:43.526 08:31:16 -- target/host_management.sh@60 -- # break 00:34:43.526 08:31:16 -- target/host_management.sh@64 -- # return 0 00:34:43.526 08:31:16 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:34:43.526 08:31:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:43.526 08:31:16 -- common/autotest_common.sh@10 -- # set +x 00:34:43.526 [2024-04-17 08:31:16.749385] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640b0 is same with the state(5) to be set 00:34:43.526 [2024-04-17 08:31:16.749797] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640b0 is same with the state(5) to be set 00:34:43.526 [2024-04-17 08:31:16.749891] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640b0 is same with the state(5) to be set 00:34:43.526 [2024-04-17 08:31:16.749937] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640b0 is same with the state(5) to be set 00:34:43.526 [2024-04-17 08:31:16.749976] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640b0 is same with the state(5) to be set 00:34:43.526 [2024-04-17 08:31:16.750010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640b0 is same with the state(5) to be set 00:34:43.526 [2024-04-17 08:31:16.750049] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640b0 is same with the state(5) to be set 00:34:43.526 [2024-04-17 08:31:16.750087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640b0 is same with the 
state(5) to be set 00:34:43.526 [... tcp.c:1574 repeats the same recv-state *ERROR* for tqpair 0x24640b0 several dozen more times (timestamps 08:31:16.750120 through 08:31:16.751917) while the host removal tears the queue pair down ...] 00:34:43.527 [2024-04-17 08:31:16.752208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.527 [2024-04-17 08:31:16.752254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [... the same print_command/print_completion pair repeats for the remaining few dozen in-flight READ/WRITE commands (various cids up to 62, LBAs 32128-40960), each completed ABORTED - SQ DELETION (00/08) as the I/O submission queue is deleted ...]
m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.752973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.752981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.752992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.752999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.753011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.753029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.753047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.753065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.753087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.753103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.753126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.753147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:43.528 [2024-04-17 08:31:16.753162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.753180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.753198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.753219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.753237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.753256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.753276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.753292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.753313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.753333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 
[2024-04-17 08:31:16.753352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.753371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.753391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.528 [2024-04-17 08:31:16.753411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.528 [2024-04-17 08:31:16.753421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.529 [2024-04-17 08:31:16.753428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.529 [2024-04-17 08:31:16.753438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.529 [2024-04-17 08:31:16.753447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.529 [2024-04-17 08:31:16.753459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:43.529 [2024-04-17 08:31:16.753467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.529 [2024-04-17 08:31:16.753556] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x62b7d0 was disconnected and freed. reset controller. 
00:34:43.529 08:31:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:43.529 08:31:16 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:34:43.529 08:31:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:43.529 08:31:16 -- common/autotest_common.sh@10 -- # set +x 00:34:43.529 [2024-04-17 08:31:16.754704] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:43.529 task offset: 37120 on job bdev=Nvme0n1 fails 00:34:43.529 00:34:43.529 Latency(us) 00:34:43.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:43.529 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:43.529 Job: Nvme0n1 ended in about 0.68 seconds with error 00:34:43.529 Verification LBA range: start 0x0 length 0x400 00:34:43.529 Nvme0n1 : 0.68 3407.89 212.99 93.81 0.00 18003.97 1738.56 29763.07 00:34:43.529 =================================================================================================================== 00:34:43.529 Total : 3407.89 212.99 93.81 0.00 18003.97 1738.56 29763.07 00:34:43.529 [2024-04-17 08:31:16.757045] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:43.529 [2024-04-17 08:31:16.757079] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62b170 (9): Bad file descriptor 00:34:43.529 [2024-04-17 08:31:16.760440] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:34:43.529 [2024-04-17 08:31:16.760535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:34:43.529 [2024-04-17 08:31:16.760561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:43.529 [2024-04-17 08:31:16.760578] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:34:43.529 [2024-04-17 08:31:16.760589] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:34:43.529 [2024-04-17 08:31:16.760598] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:43.529 [2024-04-17 08:31:16.760607] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x62b170 00:34:43.529 [2024-04-17 08:31:16.760641] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62b170 (9): Bad file descriptor 00:34:43.529 [2024-04-17 08:31:16.760655] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:43.529 [2024-04-17 08:31:16.760663] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:43.529 [2024-04-17 08:31:16.760675] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:43.529 [2024-04-17 08:31:16.760693] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
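The "Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host" rejection above comes from the allowed-host list that nvmf_subsystem_add_host populates. A minimal sketch of that grant-and-verify step outside the rpc_cmd wrapper (NQNs and script path taken from the trace; the grep filter is only illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Allow the host NQN on the subsystem so its next fabric CONNECT is accepted.
    "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Confirm the host now shows up in the subsystem's allowed-host list.
    "$rpc" nvmf_get_subsystems | grep -A2 hosts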
00:34:43.529 08:31:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:43.529 08:31:16 -- target/host_management.sh@87 -- # sleep 1 00:34:44.468 08:31:17 -- target/host_management.sh@91 -- # kill -9 70625 00:34:44.468 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (70625) - No such process 00:34:44.468 08:31:17 -- target/host_management.sh@91 -- # true 00:34:44.468 08:31:17 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:34:44.468 08:31:17 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:34:44.468 08:31:17 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:34:44.468 08:31:17 -- nvmf/common.sh@520 -- # config=() 00:34:44.468 08:31:17 -- nvmf/common.sh@520 -- # local subsystem config 00:34:44.468 08:31:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:44.468 08:31:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:44.468 { 00:34:44.468 "params": { 00:34:44.468 "name": "Nvme$subsystem", 00:34:44.468 "trtype": "$TEST_TRANSPORT", 00:34:44.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:44.468 "adrfam": "ipv4", 00:34:44.468 "trsvcid": "$NVMF_PORT", 00:34:44.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:44.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:44.468 "hdgst": ${hdgst:-false}, 00:34:44.468 "ddgst": ${ddgst:-false} 00:34:44.468 }, 00:34:44.468 "method": "bdev_nvme_attach_controller" 00:34:44.468 } 00:34:44.468 EOF 00:34:44.468 )") 00:34:44.468 08:31:17 -- nvmf/common.sh@542 -- # cat 00:34:44.468 08:31:17 -- nvmf/common.sh@544 -- # jq . 00:34:44.468 08:31:17 -- nvmf/common.sh@545 -- # IFS=, 00:34:44.468 08:31:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:44.468 "params": { 00:34:44.468 "name": "Nvme0", 00:34:44.468 "trtype": "tcp", 00:34:44.468 "traddr": "10.0.0.2", 00:34:44.468 "adrfam": "ipv4", 00:34:44.468 "trsvcid": "4420", 00:34:44.468 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:44.468 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:44.468 "hdgst": false, 00:34:44.468 "ddgst": false 00:34:44.468 }, 00:34:44.468 "method": "bdev_nvme_attach_controller" 00:34:44.468 }' 00:34:44.727 [2024-04-17 08:31:17.814579] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:44.727 [2024-04-17 08:31:17.814658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70675 ] 00:34:44.727 [2024-04-17 08:31:17.939622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:44.727 [2024-04-17 08:31:18.047888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.987 Running I/O for 1 seconds... 
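The JSON that bdevperf reads on /dev/fd/62 above is assembled by gen_nvmf_target_json from a per-subsystem heredoc template and normalized with jq. A condensed sketch of the same technique, with printf standing in for the heredoc so the snippet survives indentation (addresses and NQNs mirror the trace; not the exact helper):

    gen_target_json() {
      local id frags=()
      for id in "${@:-0}"; do
        # One attach-controller fragment per subsystem id, as in the trace above.
        frags+=("$(printf '{"method":"bdev_nvme_attach_controller","params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false}}' "$id" "$id" "$id")")
      done
      # jq slurps the fragments into a single bdev-subsystem config document.
      printf '%s\n' "${frags[@]}" | jq -s '{subsystems: [{subsystem: "bdev", config: .}]}'
    }
    # Usage, mirroring the run above:
    #   bdevperf --json <(gen_target_json 0) -q 64 -o 65536 -w verify -t 1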
00:34:45.927 00:34:45.927 Latency(us) 00:34:45.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:45.927 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:45.927 Verification LBA range: start 0x0 length 0x400 00:34:45.927 Nvme0n1 : 1.01 3671.36 229.46 0.00 0.00 17137.88 1244.90 23695.99 00:34:45.927 =================================================================================================================== 00:34:45.927 Total : 3671.36 229.46 0.00 0.00 17137.88 1244.90 23695.99 00:34:46.186 08:31:19 -- target/host_management.sh@101 -- # stoptarget 00:34:46.186 08:31:19 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:34:46.186 08:31:19 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:34:46.186 08:31:19 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:34:46.186 08:31:19 -- target/host_management.sh@40 -- # nvmftestfini 00:34:46.186 08:31:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:46.186 08:31:19 -- nvmf/common.sh@116 -- # sync 00:34:46.445 08:31:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:46.445 08:31:19 -- nvmf/common.sh@119 -- # set +e 00:34:46.445 08:31:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:46.445 08:31:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:46.445 rmmod nvme_tcp 00:34:46.445 rmmod nvme_fabrics 00:34:46.445 rmmod nvme_keyring 00:34:46.445 08:31:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:46.445 08:31:19 -- nvmf/common.sh@123 -- # set -e 00:34:46.445 08:31:19 -- nvmf/common.sh@124 -- # return 0 00:34:46.445 08:31:19 -- nvmf/common.sh@477 -- # '[' -n 70553 ']' 00:34:46.445 08:31:19 -- nvmf/common.sh@478 -- # killprocess 70553 00:34:46.445 08:31:19 -- common/autotest_common.sh@926 -- # '[' -z 70553 ']' 00:34:46.445 08:31:19 -- common/autotest_common.sh@930 -- # kill -0 70553 00:34:46.445 08:31:19 -- common/autotest_common.sh@931 -- # uname 00:34:46.445 08:31:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:46.445 08:31:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70553 00:34:46.445 08:31:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:34:46.445 08:31:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:34:46.445 08:31:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70553' 00:34:46.445 killing process with pid 70553 00:34:46.445 08:31:19 -- common/autotest_common.sh@945 -- # kill 70553 00:34:46.445 08:31:19 -- common/autotest_common.sh@950 -- # wait 70553 00:34:47.013 [2024-04-17 08:31:20.081016] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:34:47.013 08:31:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:34:47.013 08:31:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:47.013 08:31:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:47.013 08:31:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:47.013 08:31:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:47.013 08:31:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.013 08:31:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:47.013 08:31:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.013 08:31:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:34:47.013 00:34:47.013 real 0m5.707s 00:34:47.013 user 
0m23.638s 00:34:47.013 sys 0m1.178s 00:34:47.013 08:31:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:47.013 08:31:20 -- common/autotest_common.sh@10 -- # set +x 00:34:47.013 ************************************ 00:34:47.013 END TEST nvmf_host_management 00:34:47.013 ************************************ 00:34:47.013 08:31:20 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:34:47.013 00:34:47.013 real 0m6.181s 00:34:47.013 user 0m23.748s 00:34:47.013 sys 0m1.421s 00:34:47.013 08:31:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:47.013 08:31:20 -- common/autotest_common.sh@10 -- # set +x 00:34:47.013 ************************************ 00:34:47.013 END TEST nvmf_host_management 00:34:47.013 ************************************ 00:34:47.013 08:31:20 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:34:47.013 08:31:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:34:47.013 08:31:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:47.013 08:31:20 -- common/autotest_common.sh@10 -- # set +x 00:34:47.013 ************************************ 00:34:47.013 START TEST nvmf_lvol 00:34:47.013 ************************************ 00:34:47.013 08:31:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:34:47.273 * Looking for test storage... 00:34:47.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:47.273 08:31:20 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:47.273 08:31:20 -- nvmf/common.sh@7 -- # uname -s 00:34:47.273 08:31:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:47.273 08:31:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:47.273 08:31:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:47.273 08:31:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:47.273 08:31:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:47.273 08:31:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:47.273 08:31:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:47.273 08:31:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:47.273 08:31:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:47.273 08:31:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:47.273 08:31:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:34:47.273 08:31:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:34:47.273 08:31:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:47.273 08:31:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:47.273 08:31:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:47.273 08:31:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:47.273 08:31:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:47.273 08:31:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:47.273 08:31:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:47.273 08:31:20 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.273 08:31:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.274 08:31:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.274 08:31:20 -- paths/export.sh@5 -- # export PATH 00:34:47.274 08:31:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.274 08:31:20 -- nvmf/common.sh@46 -- # : 0 00:34:47.274 08:31:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:47.274 08:31:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:47.274 08:31:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:47.274 08:31:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:47.274 08:31:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:47.274 08:31:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:47.274 08:31:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:47.274 08:31:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:47.274 08:31:20 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:47.274 08:31:20 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:47.274 08:31:20 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:34:47.274 08:31:20 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:34:47.274 08:31:20 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:47.274 08:31:20 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:34:47.274 08:31:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:47.274 08:31:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
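That trap is what guarantees teardown: nvmftestfini runs on EXIT whether the test passes, fails, or is interrupted. The bare pattern, with a teardown body reduced to the steps this log later shows (namespace removal, module unload), looks roughly like:

    cleanup() {
      # Best-effort teardown; never let a failing step abort the EXIT handler.
      ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
      modprobe -v -r nvme-tcp nvme-fabrics 2>/dev/null || true
    }
    trap cleanup SIGINT SIGTERM EXIT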
00:34:47.274 08:31:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:47.274 08:31:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:47.274 08:31:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:47.274 08:31:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.274 08:31:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:47.274 08:31:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.274 08:31:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:34:47.274 08:31:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:34:47.274 08:31:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:34:47.274 08:31:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:34:47.274 08:31:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:34:47.274 08:31:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:34:47.274 08:31:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:47.274 08:31:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:47.274 08:31:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:34:47.274 08:31:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:34:47.274 08:31:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:47.274 08:31:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:47.274 08:31:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:47.274 08:31:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:47.274 08:31:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:47.274 08:31:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:47.274 08:31:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:47.274 08:31:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:47.274 08:31:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:34:47.274 08:31:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:34:47.274 Cannot find device "nvmf_tgt_br" 00:34:47.274 08:31:20 -- nvmf/common.sh@154 -- # true 00:34:47.274 08:31:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:34:47.274 Cannot find device "nvmf_tgt_br2" 00:34:47.274 08:31:20 -- nvmf/common.sh@155 -- # true 00:34:47.274 08:31:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:34:47.274 08:31:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:34:47.274 Cannot find device "nvmf_tgt_br" 00:34:47.274 08:31:20 -- nvmf/common.sh@157 -- # true 00:34:47.274 08:31:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:34:47.274 Cannot find device "nvmf_tgt_br2" 00:34:47.274 08:31:20 -- nvmf/common.sh@158 -- # true 00:34:47.274 08:31:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:34:47.274 08:31:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:34:47.274 08:31:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:47.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:47.274 08:31:20 -- nvmf/common.sh@161 -- # true 00:34:47.274 08:31:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:47.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:47.274 08:31:20 -- nvmf/common.sh@162 -- # true 00:34:47.274 08:31:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:34:47.274 08:31:20 -- nvmf/common.sh@168 -- # ip link add 
nvmf_init_if type veth peer name nvmf_init_br 00:34:47.274 08:31:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:47.274 08:31:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:47.534 08:31:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:47.534 08:31:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:47.534 08:31:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:47.534 08:31:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:34:47.534 08:31:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:34:47.534 08:31:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:34:47.534 08:31:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:34:47.534 08:31:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:34:47.534 08:31:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:34:47.534 08:31:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:47.534 08:31:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:47.534 08:31:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:47.534 08:31:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:34:47.534 08:31:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:34:47.534 08:31:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:34:47.534 08:31:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:47.534 08:31:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:47.534 08:31:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:47.534 08:31:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:47.534 08:31:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:34:47.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:47.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:34:47.534 00:34:47.534 --- 10.0.0.2 ping statistics --- 00:34:47.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.534 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:34:47.534 08:31:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:34:47.534 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:47.534 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:34:47.534 00:34:47.534 --- 10.0.0.3 ping statistics --- 00:34:47.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.534 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:34:47.534 08:31:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:47.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:47.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:34:47.534 00:34:47.534 --- 10.0.0.1 ping statistics --- 00:34:47.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.534 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:34:47.534 08:31:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:47.534 08:31:20 -- nvmf/common.sh@421 -- # return 0 00:34:47.534 08:31:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:34:47.534 08:31:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:47.534 08:31:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:47.534 08:31:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:47.534 08:31:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:47.534 08:31:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:47.534 08:31:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:47.534 08:31:20 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:34:47.534 08:31:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:47.534 08:31:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:47.534 08:31:20 -- common/autotest_common.sh@10 -- # set +x 00:34:47.534 08:31:20 -- nvmf/common.sh@469 -- # nvmfpid=70892 00:34:47.534 08:31:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:34:47.534 08:31:20 -- nvmf/common.sh@470 -- # waitforlisten 70892 00:34:47.534 08:31:20 -- common/autotest_common.sh@819 -- # '[' -z 70892 ']' 00:34:47.534 08:31:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.534 08:31:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:47.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:47.534 08:31:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:47.534 08:31:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:47.534 08:31:20 -- common/autotest_common.sh@10 -- # set +x 00:34:47.534 [2024-04-17 08:31:20.797954] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:47.534 [2024-04-17 08:31:20.798025] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:47.793 [2024-04-17 08:31:20.942358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:47.793 [2024-04-17 08:31:21.048127] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:47.793 [2024-04-17 08:31:21.048284] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:47.793 [2024-04-17 08:31:21.048298] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:47.793 [2024-04-17 08:31:21.048306] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
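Condensed, the virtual topology those pings just verified is two veth pairs, one leg moved into a target namespace, with the host-side legs joined by a bridge. A minimal iproute2 sketch using the device names and addresses from the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator leg (stays on host)
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target leg
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Open the NVMe/TCP port on the initiator leg and prove reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2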
00:34:47.793 [2024-04-17 08:31:21.048638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:47.793 [2024-04-17 08:31:21.048721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.793 [2024-04-17 08:31:21.048722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:48.361 08:31:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:48.361 08:31:21 -- common/autotest_common.sh@852 -- # return 0 00:34:48.361 08:31:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:48.361 08:31:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:48.361 08:31:21 -- common/autotest_common.sh@10 -- # set +x 00:34:48.621 08:31:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:48.621 08:31:21 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:48.621 [2024-04-17 08:31:21.924067] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:48.621 08:31:21 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:48.880 08:31:22 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:34:49.140 08:31:22 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:49.140 08:31:22 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:34:49.140 08:31:22 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:34:49.399 08:31:22 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:34:49.658 08:31:22 -- target/nvmf_lvol.sh@29 -- # lvs=4bcd6839-f39d-4fee-8f6a-ed03be0a370a 00:34:49.658 08:31:22 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4bcd6839-f39d-4fee-8f6a-ed03be0a370a lvol 20 00:34:49.916 08:31:23 -- target/nvmf_lvol.sh@32 -- # lvol=c40d95dd-5c2c-4677-826f-4de0d5c0af7e 00:34:49.916 08:31:23 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:50.204 08:31:23 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c40d95dd-5c2c-4677-826f-4de0d5c0af7e 00:34:50.465 08:31:23 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:50.807 [2024-04-17 08:31:23.804335] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:50.807 08:31:23 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:50.807 08:31:24 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:34:50.807 08:31:24 -- target/nvmf_lvol.sh@42 -- # perf_pid=71043 00:34:50.807 08:31:24 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:34:51.742 08:31:25 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot c40d95dd-5c2c-4677-826f-4de0d5c0af7e MY_SNAPSHOT 00:34:52.310 08:31:25 -- target/nvmf_lvol.sh@47 -- # snapshot=d480c3f2-d0a6-42c3-9fc5-6c66e8dedc70 00:34:52.310 08:31:25 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize c40d95dd-5c2c-4677-826f-4de0d5c0af7e 30 00:34:52.568 08:31:25 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone d480c3f2-d0a6-42c3-9fc5-6c66e8dedc70 MY_CLONE 00:34:52.826 08:31:26 -- target/nvmf_lvol.sh@49 -- # clone=a1b43871-f3c9-4c3e-91a6-7c1659057f4d 00:34:52.826 08:31:26 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate a1b43871-f3c9-4c3e-91a6-7c1659057f4d 00:34:54.202 08:31:27 -- target/nvmf_lvol.sh@53 -- # wait 71043 00:35:02.322 Initializing NVMe Controllers 00:35:02.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:35:02.322 Controller IO queue size 128, less than required. 00:35:02.322 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:02.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:35:02.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:35:02.322 Initialization complete. Launching workers. 00:35:02.322 ======================================================== 00:35:02.322 Latency(us) 00:35:02.322 Device Information : IOPS MiB/s Average min max 00:35:02.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 5616.90 21.94 22803.17 2233.32 79469.72 00:35:02.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7691.70 30.05 16640.75 3275.39 194612.99 00:35:02.322 ======================================================== 00:35:02.322 Total : 13308.60 51.99 19241.60 2233.32 194612.99 00:35:02.322 00:35:02.322 08:31:34 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:02.322 08:31:34 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c40d95dd-5c2c-4677-826f-4de0d5c0af7e 00:35:02.322 08:31:34 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4bcd6839-f39d-4fee-8f6a-ed03be0a370a 00:35:02.322 08:31:35 -- target/nvmf_lvol.sh@60 -- # rm -f 00:35:02.322 08:31:35 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:35:02.322 08:31:35 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:35:02.322 08:31:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:02.322 08:31:35 -- nvmf/common.sh@116 -- # sync 00:35:02.322 08:31:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:02.322 08:31:35 -- nvmf/common.sh@119 -- # set +e 00:35:02.322 08:31:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:02.322 08:31:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:35:02.322 rmmod nvme_tcp 00:35:02.322 rmmod nvme_fabrics 00:35:02.322 rmmod nvme_keyring 00:35:02.322 08:31:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:35:02.322 08:31:35 -- nvmf/common.sh@123 -- # set -e 00:35:02.322 08:31:35 -- nvmf/common.sh@124 -- # return 0 00:35:02.322 08:31:35 -- nvmf/common.sh@477 -- # '[' -n 70892 ']' 00:35:02.322 08:31:35 -- nvmf/common.sh@478 -- # killprocess 70892 00:35:02.322 08:31:35 -- common/autotest_common.sh@926 -- # '[' -z 70892 ']' 00:35:02.322 08:31:35 -- common/autotest_common.sh@930 -- # kill -0 70892 00:35:02.322 08:31:35 -- common/autotest_common.sh@931 -- # uname 00:35:02.322 08:31:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:02.322 08:31:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 70892 00:35:02.322 08:31:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:02.322 08:31:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:02.322 killing process with pid 70892 00:35:02.322 08:31:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70892' 00:35:02.322 08:31:35 -- common/autotest_common.sh@945 -- # kill 70892 00:35:02.322 08:31:35 -- common/autotest_common.sh@950 -- # wait 70892 00:35:02.322 08:31:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:35:02.322 08:31:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:02.322 08:31:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:35:02.322 08:31:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:02.322 08:31:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:35:02.322 08:31:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.322 08:31:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:02.322 08:31:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:02.322 08:31:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:35:02.322 00:35:02.322 real 0m15.240s 00:35:02.322 user 1m4.944s 00:35:02.322 sys 0m2.818s 00:35:02.322 08:31:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:02.322 08:31:35 -- common/autotest_common.sh@10 -- # set +x 00:35:02.322 ************************************ 00:35:02.322 END TEST nvmf_lvol 00:35:02.322 ************************************ 00:35:02.322 08:31:35 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:35:02.322 08:31:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:35:02.322 08:31:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:02.322 08:31:35 -- common/autotest_common.sh@10 -- # set +x 00:35:02.322 ************************************ 00:35:02.322 START TEST nvmf_lvs_grow 00:35:02.322 ************************************ 00:35:02.322 08:31:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:35:02.582 * Looking for test storage... 
00:35:02.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:02.582 08:31:35 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:02.582 08:31:35 -- nvmf/common.sh@7 -- # uname -s 00:35:02.582 08:31:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:02.582 08:31:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:02.582 08:31:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:02.582 08:31:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:02.582 08:31:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:02.582 08:31:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:02.582 08:31:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:02.582 08:31:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:02.582 08:31:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:02.582 08:31:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:02.582 08:31:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:35:02.582 08:31:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:35:02.582 08:31:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:02.582 08:31:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:02.582 08:31:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:02.582 08:31:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:02.582 08:31:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:02.582 08:31:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:02.582 08:31:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:02.582 08:31:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.582 08:31:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.582 08:31:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.582 08:31:35 -- 
paths/export.sh@5 -- # export PATH 00:35:02.582 08:31:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.582 08:31:35 -- nvmf/common.sh@46 -- # : 0 00:35:02.582 08:31:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:35:02.582 08:31:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:35:02.582 08:31:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:35:02.582 08:31:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:02.582 08:31:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:02.582 08:31:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:35:02.582 08:31:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:35:02.582 08:31:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:35:02.582 08:31:35 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:02.582 08:31:35 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:02.582 08:31:35 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:35:02.582 08:31:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:35:02.582 08:31:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:02.582 08:31:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:35:02.582 08:31:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:35:02.582 08:31:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:35:02.582 08:31:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.582 08:31:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:02.582 08:31:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:02.582 08:31:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:35:02.582 08:31:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:35:02.582 08:31:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:35:02.582 08:31:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:35:02.582 08:31:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:35:02.582 08:31:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:35:02.582 08:31:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:02.582 08:31:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:02.582 08:31:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:35:02.582 08:31:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:35:02.582 08:31:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:02.583 08:31:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:02.583 08:31:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:02.583 08:31:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:02.583 08:31:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:02.583 08:31:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:02.583 08:31:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:02.583 08:31:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:02.583 08:31:35 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:35:02.583 08:31:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:35:02.583 Cannot find device "nvmf_tgt_br" 00:35:02.583 08:31:35 -- nvmf/common.sh@154 -- # true 00:35:02.583 08:31:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:35:02.583 Cannot find device "nvmf_tgt_br2" 00:35:02.583 08:31:35 -- nvmf/common.sh@155 -- # true 00:35:02.583 08:31:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:35:02.583 08:31:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:35:02.583 Cannot find device "nvmf_tgt_br" 00:35:02.583 08:31:35 -- nvmf/common.sh@157 -- # true 00:35:02.583 08:31:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:35:02.583 Cannot find device "nvmf_tgt_br2" 00:35:02.583 08:31:35 -- nvmf/common.sh@158 -- # true 00:35:02.583 08:31:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:35:02.583 08:31:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:35:02.583 08:31:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:02.583 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:02.583 08:31:35 -- nvmf/common.sh@161 -- # true 00:35:02.583 08:31:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:02.583 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:02.583 08:31:35 -- nvmf/common.sh@162 -- # true 00:35:02.583 08:31:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:35:02.583 08:31:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:02.583 08:31:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:02.842 08:31:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:02.842 08:31:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:02.843 08:31:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:02.843 08:31:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:02.843 08:31:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:02.843 08:31:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:35:02.843 08:31:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:35:02.843 08:31:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:35:02.843 08:31:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:35:02.843 08:31:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:35:02.843 08:31:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:02.843 08:31:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:02.843 08:31:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:02.843 08:31:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:35:02.843 08:31:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:35:02.843 08:31:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:35:02.843 08:31:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:02.843 08:31:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:02.843 08:31:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:02.843 08:31:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:02.843 08:31:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:35:02.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:02.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:35:02.843 00:35:02.843 --- 10.0.0.2 ping statistics --- 00:35:02.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.843 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:35:02.843 08:31:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:35:02.843 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:02.843 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:35:02.843 00:35:02.843 --- 10.0.0.3 ping statistics --- 00:35:02.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.843 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:35:02.843 08:31:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:02.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:02.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:35:02.843 00:35:02.843 --- 10.0.0.1 ping statistics --- 00:35:02.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.843 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:35:02.843 08:31:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:02.843 08:31:36 -- nvmf/common.sh@421 -- # return 0 00:35:02.843 08:31:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:35:02.843 08:31:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:02.843 08:31:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:35:02.843 08:31:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:35:02.843 08:31:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:02.843 08:31:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:35:02.843 08:31:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:35:02.843 08:31:36 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:35:02.843 08:31:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:35:02.843 08:31:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:02.843 08:31:36 -- common/autotest_common.sh@10 -- # set +x 00:35:02.843 08:31:36 -- nvmf/common.sh@469 -- # nvmfpid=71405 00:35:02.843 08:31:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:35:02.843 08:31:36 -- nvmf/common.sh@470 -- # waitforlisten 71405 00:35:02.843 08:31:36 -- common/autotest_common.sh@819 -- # '[' -z 71405 ']' 00:35:02.843 08:31:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:02.843 08:31:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:02.843 08:31:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:02.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:02.843 08:31:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:02.843 08:31:36 -- common/autotest_common.sh@10 -- # set +x 00:35:02.843 [2024-04-17 08:31:36.111849] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
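
The preceding block is nvmf_veth_init from test/nvmf/common.sh doing its work: one initiator veth kept on the host, two target veths moved into the nvmf_tgt_ns_spdk namespace, and the peer ends joined by the nvmf_br bridge, with the three pings proving reachability in both directions. The "Cannot find device" and "Cannot open network namespace" messages further up are expected, since teardown of any previous run's topology is attempted unconditionally (each failing command is followed by a true). A condensed sketch of the same bring-up, commands lifted from the trace (the for-loop is editorial shorthand, not a literal line from common.sh):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target port 10.0.0.2
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target port 10.0.0.3
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # namespace -> host
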
00:35:02.843 [2024-04-17 08:31:36.111920] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:03.103 [2024-04-17 08:31:36.248571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.103 [2024-04-17 08:31:36.337356] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:35:03.103 [2024-04-17 08:31:36.337518] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:03.103 [2024-04-17 08:31:36.337526] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:03.103 [2024-04-17 08:31:36.337531] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:03.103 [2024-04-17 08:31:36.337559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.672 08:31:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:03.672 08:31:36 -- common/autotest_common.sh@852 -- # return 0 00:35:03.672 08:31:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:35:03.672 08:31:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:03.672 08:31:36 -- common/autotest_common.sh@10 -- # set +x 00:35:03.672 08:31:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:03.672 08:31:36 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:03.931 [2024-04-17 08:31:37.214711] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:03.931 08:31:37 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:35:03.931 08:31:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:03.931 08:31:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:03.931 08:31:37 -- common/autotest_common.sh@10 -- # set +x 00:35:03.931 ************************************ 00:35:03.931 START TEST lvs_grow_clean 00:35:03.931 ************************************ 00:35:03.931 08:31:37 -- common/autotest_common.sh@1104 -- # lvs_grow 00:35:03.931 08:31:37 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:35:03.931 08:31:37 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:35:03.931 08:31:37 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:35:03.931 08:31:37 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:35:03.931 08:31:37 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:35:03.931 08:31:37 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:35:03.931 08:31:37 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:35:03.931 08:31:37 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:35:04.190 08:31:37 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:04.191 08:31:37 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:35:04.191 08:31:37 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:35:04.449 08:31:37 -- target/nvmf_lvs_grow.sh@28 
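
lvs_grow_clean builds its lvstore on a plain file: a 200 MiB sparse file becomes an AIO bdev with 4 KiB blocks, and an lvstore with 4 MiB clusters is created on top. 200 MiB holds 50 such clusters; after lvstore metadata is set aside the test expects total_data_clusters to read 49 (the exact metadata overhead is the blobstore's business, the test only asserts the resulting count). Condensed, with $rpc standing for the scripts/rpc.py path used throughout the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    rm -f "$aio"
    truncate -s 200M "$aio"
    "$rpc" bdev_aio_create "$aio" aio_bdev 4096          # file -> bdev, 4 KiB blocks
    "$rpc" bdev_lvol_create_lvstore --cluster-sz 4194304 \
           --md-pages-per-cluster-ratio 300 aio_bdev lvs # prints the lvstore uuid
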
-- # lvs=d16bf39d-8297-4eda-863f-96c4a00e0866 00:35:04.449 08:31:37 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d16bf39d-8297-4eda-863f-96c4a00e0866 00:35:04.450 08:31:37 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:35:04.732 08:31:37 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:35:04.732 08:31:37 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:35:04.732 08:31:37 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d16bf39d-8297-4eda-863f-96c4a00e0866 lvol 150 00:35:05.008 08:31:38 -- target/nvmf_lvs_grow.sh@33 -- # lvol=ed42968a-7a96-4c46-8ff3-4c72e4917453 00:35:05.008 08:31:38 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:35:05.008 08:31:38 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:35:05.267 [2024-04-17 08:31:38.360384] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:35:05.267 [2024-04-17 08:31:38.360456] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:35:05.267 true 00:35:05.267 08:31:38 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d16bf39d-8297-4eda-863f-96c4a00e0866 00:35:05.267 08:31:38 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:35:05.267 08:31:38 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:35:05.267 08:31:38 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:05.527 08:31:38 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ed42968a-7a96-4c46-8ff3-4c72e4917453 00:35:05.787 08:31:38 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:06.047 [2024-04-17 08:31:39.131315] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:06.047 08:31:39 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:06.047 08:31:39 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=71567 00:35:06.047 08:31:39 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:06.047 08:31:39 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 71567 /var/tmp/bdevperf.sock 00:35:06.047 08:31:39 -- common/autotest_common.sh@819 -- # '[' -z 71567 ']' 00:35:06.047 08:31:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:06.047 08:31:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:06.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:06.047 08:31:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
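
Exporting the new lvol over NVMe/TCP is a handful of RPCs against the target, all visible in the trace above; the listener binds 10.0.0.2:4420, an address that only exists inside the namespace, so the host-side initiator reaches it across the bridge set up earlier. Condensed ($rpc as before; ed42968a-... is the lvol uuid this particular run happened to get):

    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 \
           ed42968a-7a96-4c46-8ff3-4c72e4917453
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
           -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

bdevperf then runs as a second SPDK application with its own RPC socket, and the attach traced below (bdev_nvme_attach_controller -b Nvme0 against /var/tmp/bdevperf.sock) surfaces the remote namespace as the local bdev Nvme0n1.
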
00:35:06.047 08:31:39 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:35:06.047 08:31:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:06.047 08:31:39 -- common/autotest_common.sh@10 -- # set +x 00:35:06.306 [2024-04-17 08:31:39.409377] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:35:06.306 [2024-04-17 08:31:39.409458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71567 ] 00:35:06.306 [2024-04-17 08:31:39.547069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.564 [2024-04-17 08:31:39.647411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:07.132 08:31:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:07.132 08:31:40 -- common/autotest_common.sh@852 -- # return 0 00:35:07.133 08:31:40 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:35:07.392 Nvme0n1 00:35:07.392 08:31:40 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:35:07.392 [ 00:35:07.392 { 00:35:07.392 "aliases": [ 00:35:07.392 "ed42968a-7a96-4c46-8ff3-4c72e4917453" 00:35:07.392 ], 00:35:07.392 "assigned_rate_limits": { 00:35:07.392 "r_mbytes_per_sec": 0, 00:35:07.392 "rw_ios_per_sec": 0, 00:35:07.392 "rw_mbytes_per_sec": 0, 00:35:07.392 "w_mbytes_per_sec": 0 00:35:07.392 }, 00:35:07.392 "block_size": 4096, 00:35:07.392 "claimed": false, 00:35:07.392 "driver_specific": { 00:35:07.392 "mp_policy": "active_passive", 00:35:07.392 "nvme": [ 00:35:07.392 { 00:35:07.392 "ctrlr_data": { 00:35:07.392 "ana_reporting": false, 00:35:07.392 "cntlid": 1, 00:35:07.392 "firmware_revision": "24.01.1", 00:35:07.392 "model_number": "SPDK bdev Controller", 00:35:07.392 "multi_ctrlr": true, 00:35:07.392 "oacs": { 00:35:07.392 "firmware": 0, 00:35:07.392 "format": 0, 00:35:07.392 "ns_manage": 0, 00:35:07.392 "security": 0 00:35:07.392 }, 00:35:07.392 "serial_number": "SPDK0", 00:35:07.392 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:07.392 "vendor_id": "0x8086" 00:35:07.392 }, 00:35:07.392 "ns_data": { 00:35:07.392 "can_share": true, 00:35:07.392 "id": 1 00:35:07.392 }, 00:35:07.392 "trid": { 00:35:07.392 "adrfam": "IPv4", 00:35:07.392 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:07.392 "traddr": "10.0.0.2", 00:35:07.392 "trsvcid": "4420", 00:35:07.392 "trtype": "TCP" 00:35:07.392 }, 00:35:07.392 "vs": { 00:35:07.392 "nvme_version": "1.3" 00:35:07.392 } 00:35:07.392 } 00:35:07.392 ] 00:35:07.392 }, 00:35:07.392 "name": "Nvme0n1", 00:35:07.392 "num_blocks": 38912, 00:35:07.392 "product_name": "NVMe disk", 00:35:07.392 "supported_io_types": { 00:35:07.392 "abort": true, 00:35:07.392 "compare": true, 00:35:07.392 "compare_and_write": true, 00:35:07.392 "flush": true, 00:35:07.392 "nvme_admin": true, 00:35:07.392 "nvme_io": true, 00:35:07.392 "read": true, 00:35:07.392 "reset": true, 00:35:07.392 "unmap": true, 00:35:07.392 "write": true, 00:35:07.392 "write_zeroes": true 00:35:07.392 }, 00:35:07.392 "uuid": "ed42968a-7a96-4c46-8ff3-4c72e4917453", 00:35:07.392 "zoned": false 00:35:07.392 } 
00:35:07.392 ] 00:35:07.392 08:31:40 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=71609 00:35:07.392 08:31:40 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:07.392 08:31:40 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:35:07.652 Running I/O for 10 seconds... 00:35:08.590 Latency(us) 00:35:08.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:08.590 Nvme0n1 : 1.00 10067.00 39.32 0.00 0.00 0.00 0.00 0.00 00:35:08.590 =================================================================================================================== 00:35:08.590 Total : 10067.00 39.32 0.00 0.00 0.00 0.00 0.00 00:35:08.590 00:35:09.576 08:31:42 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d16bf39d-8297-4eda-863f-96c4a00e0866 00:35:09.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:09.576 Nvme0n1 : 2.00 10375.00 40.53 0.00 0.00 0.00 0.00 0.00 00:35:09.576 =================================================================================================================== 00:35:09.576 Total : 10375.00 40.53 0.00 0.00 0.00 0.00 0.00 00:35:09.576 00:35:09.835 true 00:35:09.835 08:31:42 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d16bf39d-8297-4eda-863f-96c4a00e0866 00:35:09.835 08:31:42 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:35:10.098 08:31:43 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:35:10.098 08:31:43 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:35:10.098 08:31:43 -- target/nvmf_lvs_grow.sh@65 -- # wait 71609 00:35:10.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:10.667 Nvme0n1 : 3.00 10491.33 40.98 0.00 0.00 0.00 0.00 0.00 00:35:10.667 =================================================================================================================== 00:35:10.667 Total : 10491.33 40.98 0.00 0.00 0.00 0.00 0.00 00:35:10.667 00:35:11.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:11.601 Nvme0n1 : 4.00 10511.00 41.06 0.00 0.00 0.00 0.00 0.00 00:35:11.601 =================================================================================================================== 00:35:11.601 Total : 10511.00 41.06 0.00 0.00 0.00 0.00 0.00 00:35:11.601 00:35:12.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:12.536 Nvme0n1 : 5.00 10490.20 40.98 0.00 0.00 0.00 0.00 0.00 00:35:12.536 =================================================================================================================== 00:35:12.536 Total : 10490.20 40.98 0.00 0.00 0.00 0.00 0.00 00:35:12.536 00:35:13.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:13.473 Nvme0n1 : 6.00 10451.00 40.82 0.00 0.00 0.00 0.00 0.00 00:35:13.473 =================================================================================================================== 00:35:13.473 Total : 10451.00 40.82 0.00 0.00 0.00 0.00 0.00 00:35:13.473 00:35:14.860 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:14.860 Nvme0n1 : 7.00 10452.00 40.83 0.00 0.00 0.00 0.00 0.00 00:35:14.860 
=================================================================================================================== 00:35:14.860 Total : 10452.00 40.83 0.00 0.00 0.00 0.00 0.00 00:35:14.860 00:35:15.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:15.797 Nvme0n1 : 8.00 10447.38 40.81 0.00 0.00 0.00 0.00 0.00 00:35:15.797 =================================================================================================================== 00:35:15.797 Total : 10447.38 40.81 0.00 0.00 0.00 0.00 0.00 00:35:15.797 00:35:16.733 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:16.733 Nvme0n1 : 9.00 10445.22 40.80 0.00 0.00 0.00 0.00 0.00 00:35:16.733 =================================================================================================================== 00:35:16.733 Total : 10445.22 40.80 0.00 0.00 0.00 0.00 0.00 00:35:16.733 00:35:17.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:17.672 Nvme0n1 : 10.00 10386.30 40.57 0.00 0.00 0.00 0.00 0.00 00:35:17.672 =================================================================================================================== 00:35:17.672 Total : 10386.30 40.57 0.00 0.00 0.00 0.00 0.00 00:35:17.672 00:35:17.672 00:35:17.672 Latency(us) 00:35:17.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:17.672 Nvme0n1 : 10.01 10393.78 40.60 0.00 0.00 12310.79 5609.19 41668.30 00:35:17.672 =================================================================================================================== 00:35:17.672 Total : 10393.78 40.60 0.00 0.00 12310.79 5609.19 41668.30 00:35:17.672 0 00:35:17.672 08:31:50 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 71567 00:35:17.672 08:31:50 -- common/autotest_common.sh@926 -- # '[' -z 71567 ']' 00:35:17.672 08:31:50 -- common/autotest_common.sh@930 -- # kill -0 71567 00:35:17.672 08:31:50 -- common/autotest_common.sh@931 -- # uname 00:35:17.672 08:31:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:17.672 08:31:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71567 00:35:17.672 killing process with pid 71567 00:35:17.672 Received shutdown signal, test time was about 10.000000 seconds 00:35:17.672 00:35:17.672 Latency(us) 00:35:17.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.672 =================================================================================================================== 00:35:17.672 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:17.672 08:31:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:35:17.672 08:31:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:35:17.672 08:31:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71567' 00:35:17.672 08:31:50 -- common/autotest_common.sh@945 -- # kill 71567 00:35:17.672 08:31:50 -- common/autotest_common.sh@950 -- # wait 71567 00:35:17.932 08:31:51 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:18.191 08:31:51 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d16bf39d-8297-4eda-863f-96c4a00e0866 00:35:18.192 08:31:51 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:35:18.192 08:31:51 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 
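
This is the heart of the test: the backing file was already truncated to 400 MiB and rescanned before bdevperf started (total_data_clusters still read 49, because a rescan alone does not grow the lvstore), and two seconds into the 10-second 4 KiB randwrite run, queue depth 128, the script issued bdev_lvol_grow_lvstore while I/O was in flight. The assertions, condensed ($rpc and $aio as before; uuid d16bf39d-... from this run):

    truncate -s 400M "$aio"                  # before the run: 51200 -> 102400 blocks
    "$rpc" bdev_aio_rescan aio_bdev
    # ... bdevperf randwrite workload running ...
    "$rpc" bdev_lvol_grow_lvstore -u d16bf39d-8297-4eda-863f-96c4a00e0866
    "$rpc" bdev_lvol_get_lvstores -u d16bf39d-8297-4eda-863f-96c4a00e0866 \
        | jq -r '.[0].total_data_clusters'   # 49 -> 99
    "$rpc" bdev_lvol_get_lvstores -u d16bf39d-8297-4eda-863f-96c4a00e0866 \
        | jq -r '.[0].free_clusters'         # 61: 99 minus the 38 clusters
                                             # pinned by the 150 MiB lvol

Throughput barely moves across the grow (roughly 10.4k IOPS on every per-second sample above), consistent with the grow being a metadata-only operation rather than a data move.
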
00:35:18.192 08:31:51 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:35:18.192 08:31:51 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:18.451 [2024-04-17 08:31:51.679084] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:35:18.451 08:31:51 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d16bf39d-8297-4eda-863f-96c4a00e0866 00:35:18.451 08:31:51 -- common/autotest_common.sh@640 -- # local es=0 00:35:18.451 08:31:51 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d16bf39d-8297-4eda-863f-96c4a00e0866 00:35:18.451 08:31:51 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:18.451 08:31:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:35:18.451 08:31:51 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:18.451 08:31:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:35:18.451 08:31:51 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:18.451 08:31:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:35:18.451 08:31:51 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:18.451 08:31:51 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:35:18.451 08:31:51 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d16bf39d-8297-4eda-863f-96c4a00e0866 00:35:18.711 2024/04/17 08:31:51 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:d16bf39d-8297-4eda-863f-96c4a00e0866], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:35:18.711 request: 00:35:18.711 { 00:35:18.711 "method": "bdev_lvol_get_lvstores", 00:35:18.711 "params": { 00:35:18.711 "uuid": "d16bf39d-8297-4eda-863f-96c4a00e0866" 00:35:18.711 } 00:35:18.711 } 00:35:18.711 Got JSON-RPC error response 00:35:18.711 GoRPCClient: error on JSON-RPC call 00:35:18.711 08:31:51 -- common/autotest_common.sh@643 -- # es=1 00:35:18.711 08:31:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:35:18.711 08:31:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:35:18.711 08:31:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:35:18.711 08:31:51 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:18.970 aio_bdev 00:35:18.970 08:31:52 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev ed42968a-7a96-4c46-8ff3-4c72e4917453 00:35:18.970 08:31:52 -- common/autotest_common.sh@887 -- # local bdev_name=ed42968a-7a96-4c46-8ff3-4c72e4917453 00:35:18.970 08:31:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:35:18.970 08:31:52 -- common/autotest_common.sh@889 -- # local i 00:35:18.970 08:31:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:35:18.970 08:31:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:35:18.970 08:31:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:19.242 08:31:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 
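
The negative check above leans on autotest_common.sh's NOT helper, which succeeds only when the wrapped command fails: deleting the AIO base bdev hot-removes the lvstore riding on it ("bdev aio_bdev being removed: closing lvstore lvs"), so bdev_lvol_get_lvstores must come back with Code=-19 (No such device), and the GoRPCClient error line is the expected outcome rather than a failure. In outline:

    "$rpc" bdev_aio_delete aio_bdev          # the lvstore is hot-removed with it
    NOT "$rpc" bdev_lvol_get_lvstores -u d16bf39d-8297-4eda-863f-96c4a00e0866
    # NOT returns 0 because the RPC failed with -19; success here would fail the test

Recreating the AIO bdev immediately afterwards lets bdev examine rediscover the lvstore, and waitforbdev polls bdev_get_bdevs with a 2000 ms timeout until the lvol reappears under its old uuid, now carrying the lvs/lvol alias.
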
ed42968a-7a96-4c46-8ff3-4c72e4917453 -t 2000 00:35:19.242 [ 00:35:19.242 { 00:35:19.242 "aliases": [ 00:35:19.242 "lvs/lvol" 00:35:19.242 ], 00:35:19.242 "assigned_rate_limits": { 00:35:19.242 "r_mbytes_per_sec": 0, 00:35:19.242 "rw_ios_per_sec": 0, 00:35:19.242 "rw_mbytes_per_sec": 0, 00:35:19.242 "w_mbytes_per_sec": 0 00:35:19.242 }, 00:35:19.242 "block_size": 4096, 00:35:19.242 "claimed": false, 00:35:19.242 "driver_specific": { 00:35:19.242 "lvol": { 00:35:19.242 "base_bdev": "aio_bdev", 00:35:19.242 "clone": false, 00:35:19.242 "esnap_clone": false, 00:35:19.242 "lvol_store_uuid": "d16bf39d-8297-4eda-863f-96c4a00e0866", 00:35:19.242 "snapshot": false, 00:35:19.242 "thin_provision": false 00:35:19.242 } 00:35:19.242 }, 00:35:19.242 "name": "ed42968a-7a96-4c46-8ff3-4c72e4917453", 00:35:19.242 "num_blocks": 38912, 00:35:19.242 "product_name": "Logical Volume", 00:35:19.242 "supported_io_types": { 00:35:19.242 "abort": false, 00:35:19.242 "compare": false, 00:35:19.242 "compare_and_write": false, 00:35:19.242 "flush": false, 00:35:19.242 "nvme_admin": false, 00:35:19.242 "nvme_io": false, 00:35:19.242 "read": true, 00:35:19.242 "reset": true, 00:35:19.242 "unmap": true, 00:35:19.242 "write": true, 00:35:19.242 "write_zeroes": true 00:35:19.242 }, 00:35:19.242 "uuid": "ed42968a-7a96-4c46-8ff3-4c72e4917453", 00:35:19.242 "zoned": false 00:35:19.242 } 00:35:19.242 ] 00:35:19.242 08:31:52 -- common/autotest_common.sh@895 -- # return 0 00:35:19.242 08:31:52 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:35:19.242 08:31:52 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d16bf39d-8297-4eda-863f-96c4a00e0866 00:35:19.514 08:31:52 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:35:19.514 08:31:52 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:35:19.514 08:31:52 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d16bf39d-8297-4eda-863f-96c4a00e0866 00:35:19.773 08:31:52 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:35:19.773 08:31:52 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ed42968a-7a96-4c46-8ff3-4c72e4917453 00:35:19.773 08:31:53 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d16bf39d-8297-4eda-863f-96c4a00e0866 00:35:20.033 08:31:53 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:20.292 08:31:53 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:35:20.861 ************************************ 00:35:20.861 END TEST lvs_grow_clean 00:35:20.861 ************************************ 00:35:20.861 00:35:20.861 real 0m16.646s 00:35:20.861 user 0m15.926s 00:35:20.861 sys 0m1.907s 00:35:20.861 08:31:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:20.861 08:31:53 -- common/autotest_common.sh@10 -- # set +x 00:35:20.861 08:31:53 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:35:20.861 08:31:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:35:20.861 08:31:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:20.861 08:31:53 -- common/autotest_common.sh@10 -- # set +x 00:35:20.861 ************************************ 00:35:20.861 START TEST lvs_grow_dirty 00:35:20.861 ************************************ 00:35:20.861 08:31:53 -- 
common/autotest_common.sh@1104 -- # lvs_grow dirty 00:35:20.861 08:31:53 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:35:20.861 08:31:53 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:35:20.861 08:31:53 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:35:20.861 08:31:53 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:35:20.861 08:31:53 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:35:20.861 08:31:53 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:35:20.861 08:31:53 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:35:20.861 08:31:53 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:35:20.861 08:31:53 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:21.121 08:31:54 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:35:21.121 08:31:54 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:35:21.121 08:31:54 -- target/nvmf_lvs_grow.sh@28 -- # lvs=15425d98-8adb-4020-a63b-a90decde86f5 00:35:21.121 08:31:54 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 15425d98-8adb-4020-a63b-a90decde86f5 00:35:21.121 08:31:54 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:35:21.381 08:31:54 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:35:21.381 08:31:54 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:35:21.381 08:31:54 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 15425d98-8adb-4020-a63b-a90decde86f5 lvol 150 00:35:21.640 08:31:54 -- target/nvmf_lvs_grow.sh@33 -- # lvol=d40fb6d1-d509-47ae-8589-6b70a0d7db92 00:35:21.640 08:31:54 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:35:21.640 08:31:54 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:35:21.899 [2024-04-17 08:31:55.020646] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:35:21.899 [2024-04-17 08:31:55.020717] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:35:21.899 true 00:35:21.899 08:31:55 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:35:21.899 08:31:55 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 15425d98-8adb-4020-a63b-a90decde86f5 00:35:22.158 08:31:55 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:35:22.158 08:31:55 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:22.158 08:31:55 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d40fb6d1-d509-47ae-8589-6b70a0d7db92 00:35:22.418 08:31:55 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:22.677 
08:31:55 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:22.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:22.937 08:31:56 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:35:22.937 08:31:56 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=71989 00:35:22.937 08:31:56 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:22.937 08:31:56 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 71989 /var/tmp/bdevperf.sock 00:35:22.937 08:31:56 -- common/autotest_common.sh@819 -- # '[' -z 71989 ']' 00:35:22.937 08:31:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:22.937 08:31:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:22.937 08:31:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:22.937 08:31:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:22.937 08:31:56 -- common/autotest_common.sh@10 -- # set +x 00:35:22.937 [2024-04-17 08:31:56.101612] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:35:22.937 [2024-04-17 08:31:56.101678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71989 ] 00:35:22.937 [2024-04-17 08:31:56.227167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:23.197 [2024-04-17 08:31:56.376582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:23.765 08:31:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:23.765 08:31:57 -- common/autotest_common.sh@852 -- # return 0 00:35:23.765 08:31:57 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:35:24.025 Nvme0n1 00:35:24.025 08:31:57 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:35:24.284 [ 00:35:24.284 { 00:35:24.284 "aliases": [ 00:35:24.284 "d40fb6d1-d509-47ae-8589-6b70a0d7db92" 00:35:24.284 ], 00:35:24.284 "assigned_rate_limits": { 00:35:24.284 "r_mbytes_per_sec": 0, 00:35:24.284 "rw_ios_per_sec": 0, 00:35:24.284 "rw_mbytes_per_sec": 0, 00:35:24.284 "w_mbytes_per_sec": 0 00:35:24.284 }, 00:35:24.284 "block_size": 4096, 00:35:24.284 "claimed": false, 00:35:24.284 "driver_specific": { 00:35:24.284 "mp_policy": "active_passive", 00:35:24.284 "nvme": [ 00:35:24.284 { 00:35:24.284 "ctrlr_data": { 00:35:24.284 "ana_reporting": false, 00:35:24.284 "cntlid": 1, 00:35:24.284 "firmware_revision": "24.01.1", 00:35:24.284 "model_number": "SPDK bdev Controller", 00:35:24.284 "multi_ctrlr": true, 00:35:24.284 "oacs": { 00:35:24.284 "firmware": 0, 00:35:24.284 "format": 0, 00:35:24.284 "ns_manage": 0, 00:35:24.284 "security": 0 00:35:24.284 }, 00:35:24.284 "serial_number": "SPDK0", 00:35:24.284 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:24.284 "vendor_id": "0x8086" 00:35:24.284 }, 00:35:24.284 "ns_data": { 00:35:24.284 "can_share": true, 
00:35:24.284 "id": 1 00:35:24.284 }, 00:35:24.284 "trid": { 00:35:24.284 "adrfam": "IPv4", 00:35:24.284 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:24.284 "traddr": "10.0.0.2", 00:35:24.284 "trsvcid": "4420", 00:35:24.284 "trtype": "TCP" 00:35:24.284 }, 00:35:24.284 "vs": { 00:35:24.284 "nvme_version": "1.3" 00:35:24.284 } 00:35:24.284 } 00:35:24.285 ] 00:35:24.285 }, 00:35:24.285 "name": "Nvme0n1", 00:35:24.285 "num_blocks": 38912, 00:35:24.285 "product_name": "NVMe disk", 00:35:24.285 "supported_io_types": { 00:35:24.285 "abort": true, 00:35:24.285 "compare": true, 00:35:24.285 "compare_and_write": true, 00:35:24.285 "flush": true, 00:35:24.285 "nvme_admin": true, 00:35:24.285 "nvme_io": true, 00:35:24.285 "read": true, 00:35:24.285 "reset": true, 00:35:24.285 "unmap": true, 00:35:24.285 "write": true, 00:35:24.285 "write_zeroes": true 00:35:24.285 }, 00:35:24.285 "uuid": "d40fb6d1-d509-47ae-8589-6b70a0d7db92", 00:35:24.285 "zoned": false 00:35:24.285 } 00:35:24.285 ] 00:35:24.285 08:31:57 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:24.285 08:31:57 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72039 00:35:24.285 08:31:57 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:35:24.285 Running I/O for 10 seconds... 00:35:25.279 Latency(us) 00:35:25.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.279 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:25.279 Nvme0n1 : 1.00 10705.00 41.82 0.00 0.00 0.00 0.00 0.00 00:35:25.279 =================================================================================================================== 00:35:25.279 Total : 10705.00 41.82 0.00 0.00 0.00 0.00 0.00 00:35:25.279 00:35:26.218 08:31:59 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 15425d98-8adb-4020-a63b-a90decde86f5 00:35:26.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:26.477 Nvme0n1 : 2.00 10719.00 41.87 0.00 0.00 0.00 0.00 0.00 00:35:26.477 =================================================================================================================== 00:35:26.477 Total : 10719.00 41.87 0.00 0.00 0.00 0.00 0.00 00:35:26.477 00:35:26.477 true 00:35:26.477 08:31:59 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 15425d98-8adb-4020-a63b-a90decde86f5 00:35:26.477 08:31:59 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:35:26.737 08:32:00 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:35:26.737 08:32:00 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:35:26.737 08:32:00 -- target/nvmf_lvs_grow.sh@65 -- # wait 72039 00:35:27.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:27.305 Nvme0n1 : 3.00 10556.67 41.24 0.00 0.00 0.00 0.00 0.00 00:35:27.305 =================================================================================================================== 00:35:27.305 Total : 10556.67 41.24 0.00 0.00 0.00 0.00 0.00 00:35:27.305 00:35:28.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:28.680 Nvme0n1 : 4.00 10468.50 40.89 0.00 0.00 0.00 0.00 0.00 00:35:28.680 =================================================================================================================== 00:35:28.680 Total : 10468.50 40.89 0.00 0.00 0.00 0.00 0.00 00:35:28.680 
00:35:29.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:29.619 Nvme0n1 : 5.00 10491.60 40.98 0.00 0.00 0.00 0.00 0.00 00:35:29.619 =================================================================================================================== 00:35:29.619 Total : 10491.60 40.98 0.00 0.00 0.00 0.00 0.00 00:35:29.619 00:35:30.561 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:30.561 Nvme0n1 : 6.00 10501.00 41.02 0.00 0.00 0.00 0.00 0.00 00:35:30.561 =================================================================================================================== 00:35:30.561 Total : 10501.00 41.02 0.00 0.00 0.00 0.00 0.00 00:35:30.561 00:35:31.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:31.504 Nvme0n1 : 7.00 9995.71 39.05 0.00 0.00 0.00 0.00 0.00 00:35:31.504 =================================================================================================================== 00:35:31.504 Total : 9995.71 39.05 0.00 0.00 0.00 0.00 0.00 00:35:31.504 00:35:32.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:32.442 Nvme0n1 : 8.00 9170.75 35.82 0.00 0.00 0.00 0.00 0.00 00:35:32.442 =================================================================================================================== 00:35:32.442 Total : 9170.75 35.82 0.00 0.00 0.00 0.00 0.00 00:35:32.442 00:35:33.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:33.378 Nvme0n1 : 9.00 9209.11 35.97 0.00 0.00 0.00 0.00 0.00 00:35:33.378 =================================================================================================================== 00:35:33.378 Total : 9209.11 35.97 0.00 0.00 0.00 0.00 0.00 00:35:33.378 00:35:34.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:34.313 Nvme0n1 : 10.00 9195.30 35.92 0.00 0.00 0.00 0.00 0.00 00:35:34.313 =================================================================================================================== 00:35:34.313 Total : 9195.30 35.92 0.00 0.00 0.00 0.00 0.00 00:35:34.313 00:35:34.313 00:35:34.313 Latency(us) 00:35:34.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:34.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:34.313 Nvme0n1 : 10.01 9201.46 35.94 0.00 0.00 13907.25 4779.26 1025681.33 00:35:34.313 =================================================================================================================== 00:35:34.313 Total : 9201.46 35.94 0.00 0.00 13907.25 4779.26 1025681.33 00:35:34.313 0 00:35:34.313 08:32:07 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 71989 00:35:34.313 08:32:07 -- common/autotest_common.sh@926 -- # '[' -z 71989 ']' 00:35:34.313 08:32:07 -- common/autotest_common.sh@930 -- # kill -0 71989 00:35:34.313 08:32:07 -- common/autotest_common.sh@931 -- # uname 00:35:34.313 08:32:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:34.313 08:32:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71989 00:35:34.313 08:32:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:35:34.313 08:32:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:35:34.313 08:32:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71989' 00:35:34.313 killing process with pid 71989 00:35:34.313 08:32:07 -- common/autotest_common.sh@945 -- # kill 71989 00:35:34.313 Received shutdown signal, 
test time was about 10.000000 seconds 00:35:34.313 00:35:34.313 Latency(us) 00:35:34.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:34.313 =================================================================================================================== 00:35:34.313 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:34.313 08:32:07 -- common/autotest_common.sh@950 -- # wait 71989 00:35:34.573 08:32:07 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:34.831 08:32:08 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 15425d98-8adb-4020-a63b-a90decde86f5 00:35:34.831 08:32:08 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:35:35.090 08:32:08 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:35:35.090 08:32:08 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:35:35.090 08:32:08 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 71405 00:35:35.090 08:32:08 -- target/nvmf_lvs_grow.sh@74 -- # wait 71405 00:35:35.090 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 71405 Killed "${NVMF_APP[@]}" "$@" 00:35:35.090 08:32:08 -- target/nvmf_lvs_grow.sh@74 -- # true 00:35:35.090 08:32:08 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:35:35.090 08:32:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:35:35.090 08:32:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:35.090 08:32:08 -- common/autotest_common.sh@10 -- # set +x 00:35:35.090 08:32:08 -- nvmf/common.sh@469 -- # nvmfpid=72188 00:35:35.090 08:32:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:35:35.090 08:32:08 -- nvmf/common.sh@470 -- # waitforlisten 72188 00:35:35.090 08:32:08 -- common/autotest_common.sh@819 -- # '[' -z 72188 ']' 00:35:35.090 08:32:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.090 08:32:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:35.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:35.090 08:32:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.090 08:32:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:35.090 08:32:08 -- common/autotest_common.sh@10 -- # set +x 00:35:35.349 [2024-04-17 08:32:08.442484] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:35:35.350 [2024-04-17 08:32:08.442552] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:35.350 [2024-04-17 08:32:08.582083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.608 [2024-04-17 08:32:08.683212] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:35:35.608 [2024-04-17 08:32:08.683347] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:35.608 [2024-04-17 08:32:08.683355] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
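
Here the dirty variant earns its name: rather than shutting the target down cleanly, the script kill -9's the nvmf_tgt that owns the lvstore (pid 71405, started at the top of the suite), so the blobstore never gets a clean close, then brings up a fresh target (pid 72188) in the same namespace. In outline (paths and flags as traced; the trailing & is implied by nvmfappstart rather than shown literally):

    kill -9 "$nvmfpid"     # 71405 here: SIGKILL, no clean blobstore close
    wait "$nvmfpid"        # reaps the 'line 74: 71405 Killed' notice
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

When the new target re-attaches the AIO file a few lines below, the blobstore load path notices the unclean shutdown and replays its metadata: that is what the "Performing recovery on blobstore" and "Recover: blob 0x0 / 0x1" notices are.
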
00:35:35.608 [2024-04-17 08:32:08.683360] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:35.608 [2024-04-17 08:32:08.683382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:36.177 08:32:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:36.177 08:32:09 -- common/autotest_common.sh@852 -- # return 0 00:35:36.177 08:32:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:35:36.177 08:32:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:36.177 08:32:09 -- common/autotest_common.sh@10 -- # set +x 00:35:36.177 08:32:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:36.177 08:32:09 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:36.438 [2024-04-17 08:32:09.630574] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:35:36.438 [2024-04-17 08:32:09.631513] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:35:36.438 [2024-04-17 08:32:09.631667] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:35:36.438 08:32:09 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:35:36.438 08:32:09 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev d40fb6d1-d509-47ae-8589-6b70a0d7db92 00:35:36.438 08:32:09 -- common/autotest_common.sh@887 -- # local bdev_name=d40fb6d1-d509-47ae-8589-6b70a0d7db92 00:35:36.438 08:32:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:35:36.438 08:32:09 -- common/autotest_common.sh@889 -- # local i 00:35:36.438 08:32:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:35:36.438 08:32:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:35:36.438 08:32:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:36.697 08:32:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d40fb6d1-d509-47ae-8589-6b70a0d7db92 -t 2000 00:35:36.957 [ 00:35:36.957 { 00:35:36.957 "aliases": [ 00:35:36.957 "lvs/lvol" 00:35:36.957 ], 00:35:36.957 "assigned_rate_limits": { 00:35:36.957 "r_mbytes_per_sec": 0, 00:35:36.957 "rw_ios_per_sec": 0, 00:35:36.957 "rw_mbytes_per_sec": 0, 00:35:36.957 "w_mbytes_per_sec": 0 00:35:36.957 }, 00:35:36.957 "block_size": 4096, 00:35:36.957 "claimed": false, 00:35:36.957 "driver_specific": { 00:35:36.957 "lvol": { 00:35:36.957 "base_bdev": "aio_bdev", 00:35:36.957 "clone": false, 00:35:36.957 "esnap_clone": false, 00:35:36.957 "lvol_store_uuid": "15425d98-8adb-4020-a63b-a90decde86f5", 00:35:36.957 "snapshot": false, 00:35:36.957 "thin_provision": false 00:35:36.957 } 00:35:36.957 }, 00:35:36.957 "name": "d40fb6d1-d509-47ae-8589-6b70a0d7db92", 00:35:36.957 "num_blocks": 38912, 00:35:36.957 "product_name": "Logical Volume", 00:35:36.957 "supported_io_types": { 00:35:36.957 "abort": false, 00:35:36.957 "compare": false, 00:35:36.957 "compare_and_write": false, 00:35:36.957 "flush": false, 00:35:36.957 "nvme_admin": false, 00:35:36.957 "nvme_io": false, 00:35:36.957 "read": true, 00:35:36.957 "reset": true, 00:35:36.957 "unmap": true, 00:35:36.957 "write": true, 00:35:36.957 "write_zeroes": true 00:35:36.957 }, 00:35:36.957 "uuid": "d40fb6d1-d509-47ae-8589-6b70a0d7db92", 00:35:36.957 "zoned": false 00:35:36.957 } 00:35:36.957 ] 00:35:36.957 08:32:10 -- common/autotest_common.sh@895 -- # return 0 
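
The point of the recovery exercise is that the grow performed mid-run survives a SIGKILL: once the lvol is back, the test re-reads the lvstore and expects the same accounting as before the crash. A minimal check, using the uuid from this run:

    "$rpc" bdev_lvol_get_lvstores -u 15425d98-8adb-4020-a63b-a90decde86f5 \
        | jq -r '.[0].free_clusters'         # must still be 61
    "$rpc" bdev_lvol_get_lvstores -u 15425d98-8adb-4020-a63b-a90decde86f5 \
        | jq -r '.[0].total_data_clusters'   # must still be 99

After that the test repeats the clean variant's hot-remove round trip (bdev_aio_delete, NOT bdev_lvol_get_lvstores, bdev_aio_create, waitforbdev) to show that the recovered lvstore behaves like a freshly created one.
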
00:35:36.957 08:32:10 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:35:36.957 08:32:10 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 15425d98-8adb-4020-a63b-a90decde86f5 00:35:37.218 08:32:10 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:35:37.218 08:32:10 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:35:37.218 08:32:10 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 15425d98-8adb-4020-a63b-a90decde86f5 00:35:37.218 08:32:10 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:35:37.218 08:32:10 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:37.479 [2024-04-17 08:32:10.734488] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:35:37.479 08:32:10 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 15425d98-8adb-4020-a63b-a90decde86f5 00:35:37.479 08:32:10 -- common/autotest_common.sh@640 -- # local es=0 00:35:37.479 08:32:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 15425d98-8adb-4020-a63b-a90decde86f5 00:35:37.479 08:32:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:37.479 08:32:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:35:37.479 08:32:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:37.479 08:32:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:35:37.479 08:32:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:37.479 08:32:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:35:37.479 08:32:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:37.479 08:32:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:35:37.479 08:32:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 15425d98-8adb-4020-a63b-a90decde86f5 00:35:37.738 2024/04/17 08:32:10 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:15425d98-8adb-4020-a63b-a90decde86f5], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:35:37.738 request: 00:35:37.738 { 00:35:37.738 "method": "bdev_lvol_get_lvstores", 00:35:37.738 "params": { 00:35:37.738 "uuid": "15425d98-8adb-4020-a63b-a90decde86f5" 00:35:37.738 } 00:35:37.738 } 00:35:37.738 Got JSON-RPC error response 00:35:37.738 GoRPCClient: error on JSON-RPC call 00:35:37.738 08:32:10 -- common/autotest_common.sh@643 -- # es=1 00:35:37.738 08:32:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:35:37.738 08:32:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:35:37.738 08:32:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:35:37.738 08:32:10 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:37.998 aio_bdev 00:35:37.998 08:32:11 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev d40fb6d1-d509-47ae-8589-6b70a0d7db92 00:35:37.998 08:32:11 -- common/autotest_common.sh@887 -- # local 
bdev_name=d40fb6d1-d509-47ae-8589-6b70a0d7db92 00:35:37.998 08:32:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:35:37.998 08:32:11 -- common/autotest_common.sh@889 -- # local i 00:35:37.998 08:32:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:35:37.998 08:32:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:35:37.998 08:32:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:38.258 08:32:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d40fb6d1-d509-47ae-8589-6b70a0d7db92 -t 2000 00:35:38.517 [ 00:35:38.517 { 00:35:38.517 "aliases": [ 00:35:38.517 "lvs/lvol" 00:35:38.517 ], 00:35:38.517 "assigned_rate_limits": { 00:35:38.517 "r_mbytes_per_sec": 0, 00:35:38.517 "rw_ios_per_sec": 0, 00:35:38.517 "rw_mbytes_per_sec": 0, 00:35:38.517 "w_mbytes_per_sec": 0 00:35:38.517 }, 00:35:38.517 "block_size": 4096, 00:35:38.517 "claimed": false, 00:35:38.517 "driver_specific": { 00:35:38.517 "lvol": { 00:35:38.517 "base_bdev": "aio_bdev", 00:35:38.517 "clone": false, 00:35:38.517 "esnap_clone": false, 00:35:38.517 "lvol_store_uuid": "15425d98-8adb-4020-a63b-a90decde86f5", 00:35:38.517 "snapshot": false, 00:35:38.517 "thin_provision": false 00:35:38.517 } 00:35:38.517 }, 00:35:38.517 "name": "d40fb6d1-d509-47ae-8589-6b70a0d7db92", 00:35:38.517 "num_blocks": 38912, 00:35:38.517 "product_name": "Logical Volume", 00:35:38.517 "supported_io_types": { 00:35:38.517 "abort": false, 00:35:38.517 "compare": false, 00:35:38.517 "compare_and_write": false, 00:35:38.517 "flush": false, 00:35:38.517 "nvme_admin": false, 00:35:38.517 "nvme_io": false, 00:35:38.517 "read": true, 00:35:38.517 "reset": true, 00:35:38.517 "unmap": true, 00:35:38.517 "write": true, 00:35:38.517 "write_zeroes": true 00:35:38.517 }, 00:35:38.517 "uuid": "d40fb6d1-d509-47ae-8589-6b70a0d7db92", 00:35:38.517 "zoned": false 00:35:38.517 } 00:35:38.517 ] 00:35:38.517 08:32:11 -- common/autotest_common.sh@895 -- # return 0 00:35:38.517 08:32:11 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 15425d98-8adb-4020-a63b-a90decde86f5 00:35:38.517 08:32:11 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:35:38.775 08:32:11 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:35:38.775 08:32:11 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:35:38.775 08:32:11 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 15425d98-8adb-4020-a63b-a90decde86f5 00:35:39.034 08:32:12 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:35:39.034 08:32:12 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d40fb6d1-d509-47ae-8589-6b70a0d7db92 00:35:39.034 08:32:12 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 15425d98-8adb-4020-a63b-a90decde86f5 00:35:39.292 08:32:12 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:39.551 08:32:12 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:35:40.118 00:35:40.118 real 0m19.265s 00:35:40.118 user 0m39.485s 00:35:40.118 sys 0m6.891s 00:35:40.118 08:32:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:40.118 08:32:13 -- common/autotest_common.sh@10 -- # set +x 00:35:40.118 
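
Teardown unwinds in reverse creation order, child before parent, before deleting the backing file; condensed from the trace with this run's uuids:

    "$rpc" bdev_lvol_delete d40fb6d1-d509-47ae-8589-6b70a0d7db92
    "$rpc" bdev_lvol_delete_lvstore -u 15425d98-8adb-4020-a63b-a90decde86f5
    "$rpc" bdev_aio_delete aio_bdev
    rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

The timing just above (real 0m19.265s, against the clean test's 0m16.646s) reflects the extra cost of the kill, the target restart and the blobstore recovery.
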
************************************ 00:35:40.118 END TEST lvs_grow_dirty 00:35:40.118 ************************************ 00:35:40.118 08:32:13 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:35:40.118 08:32:13 -- common/autotest_common.sh@796 -- # type=--id 00:35:40.118 08:32:13 -- common/autotest_common.sh@797 -- # id=0 00:35:40.118 08:32:13 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:35:40.118 08:32:13 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:35:40.118 08:32:13 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:35:40.118 08:32:13 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:35:40.118 08:32:13 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:35:40.118 08:32:13 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:35:40.118 nvmf_trace.0 00:35:40.118 08:32:13 -- common/autotest_common.sh@811 -- # return 0 00:35:40.118 08:32:13 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:35:40.118 08:32:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:40.118 08:32:13 -- nvmf/common.sh@116 -- # sync 00:35:40.118 08:32:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:40.118 08:32:13 -- nvmf/common.sh@119 -- # set +e 00:35:40.118 08:32:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:40.118 08:32:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:35:40.118 rmmod nvme_tcp 00:35:40.376 rmmod nvme_fabrics 00:35:40.376 rmmod nvme_keyring 00:35:40.376 08:32:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:35:40.376 08:32:13 -- nvmf/common.sh@123 -- # set -e 00:35:40.376 08:32:13 -- nvmf/common.sh@124 -- # return 0 00:35:40.376 08:32:13 -- nvmf/common.sh@477 -- # '[' -n 72188 ']' 00:35:40.376 08:32:13 -- nvmf/common.sh@478 -- # killprocess 72188 00:35:40.376 08:32:13 -- common/autotest_common.sh@926 -- # '[' -z 72188 ']' 00:35:40.376 08:32:13 -- common/autotest_common.sh@930 -- # kill -0 72188 00:35:40.376 08:32:13 -- common/autotest_common.sh@931 -- # uname 00:35:40.376 08:32:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:40.376 08:32:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72188 00:35:40.376 08:32:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:40.376 08:32:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:40.376 08:32:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72188' 00:35:40.376 killing process with pid 72188 00:35:40.376 08:32:13 -- common/autotest_common.sh@945 -- # kill 72188 00:35:40.376 08:32:13 -- common/autotest_common.sh@950 -- # wait 72188 00:35:40.635 08:32:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:35:40.635 08:32:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:40.635 08:32:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:35:40.635 08:32:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:40.635 08:32:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:35:40.635 08:32:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:40.635 08:32:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:40.635 08:32:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:40.635 08:32:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:35:40.635 00:35:40.635 real 0m38.273s 00:35:40.635 user 1m1.274s 00:35:40.635 sys 0m9.508s 00:35:40.635 
************************************ 00:35:40.635 END TEST nvmf_lvs_grow 00:35:40.635 ************************************ 00:35:40.635 08:32:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:40.635 08:32:13 -- common/autotest_common.sh@10 -- # set +x 00:35:40.635 08:32:13 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:35:40.635 08:32:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:35:40.635 08:32:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:40.635 08:32:13 -- common/autotest_common.sh@10 -- # set +x 00:35:40.635 ************************************ 00:35:40.635 START TEST nvmf_bdev_io_wait 00:35:40.635 ************************************ 00:35:40.635 08:32:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:35:40.896 * Looking for test storage... 00:35:40.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:40.896 08:32:14 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:40.896 08:32:14 -- nvmf/common.sh@7 -- # uname -s 00:35:40.896 08:32:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:40.896 08:32:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:40.896 08:32:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:40.896 08:32:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:40.896 08:32:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:40.896 08:32:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:40.896 08:32:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:40.896 08:32:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:40.896 08:32:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:40.896 08:32:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:40.896 08:32:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:35:40.896 08:32:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:35:40.896 08:32:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:40.896 08:32:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:40.896 08:32:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:40.896 08:32:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:40.896 08:32:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:40.896 08:32:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:40.896 08:32:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:40.896 08:32:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.896 08:32:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.896 08:32:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.896 08:32:14 -- paths/export.sh@5 -- # export PATH 00:35:40.896 08:32:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.896 08:32:14 -- nvmf/common.sh@46 -- # : 0 00:35:40.896 08:32:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:35:40.896 08:32:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:35:40.896 08:32:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:35:40.896 08:32:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:40.896 08:32:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:40.896 08:32:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:35:40.896 08:32:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:35:40.896 08:32:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:35:40.896 08:32:14 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:40.896 08:32:14 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:40.896 08:32:14 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:35:40.896 08:32:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:35:40.896 08:32:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:40.896 08:32:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:35:40.896 08:32:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:35:40.896 08:32:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:35:40.896 08:32:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:40.896 08:32:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:40.896 08:32:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:40.896 08:32:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:35:40.896 08:32:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:35:40.896 08:32:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:35:40.896 08:32:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:35:40.896 08:32:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 
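The nvmf_veth_init step traced next builds the whole NET_TYPE=virt topology: one network namespace for the target, three veth pairs, and a bridge joining the host-side peers. Condensed from the commands below into a runnable sketch (names and addresses are exactly the ones this run uses; the helper's pre-cleanup of stale devices and its error handling are omitted):

ip netns add nvmf_tgt_ns_spdk                               # target gets its own network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # first target port (10.0.0.2)
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target port (10.0.0.3)
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge; ip link set nvmf_br up     # bridge ties the three host-side peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings the helper ends with (visible further down) verify host-to-target reachability on both target ports, and target-to-host reachability, before any NVMe/TCP traffic is attempted.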
00:35:40.896 08:32:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:35:40.896 08:32:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:40.896 08:32:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:40.896 08:32:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:35:40.896 08:32:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:35:40.896 08:32:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:40.896 08:32:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:40.896 08:32:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:40.896 08:32:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:40.896 08:32:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:40.896 08:32:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:40.896 08:32:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:40.896 08:32:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:40.896 08:32:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:35:40.896 08:32:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:35:40.896 Cannot find device "nvmf_tgt_br" 00:35:40.896 08:32:14 -- nvmf/common.sh@154 -- # true 00:35:40.896 08:32:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:35:40.896 Cannot find device "nvmf_tgt_br2" 00:35:40.896 08:32:14 -- nvmf/common.sh@155 -- # true 00:35:40.896 08:32:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:35:40.896 08:32:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:35:40.896 Cannot find device "nvmf_tgt_br" 00:35:40.896 08:32:14 -- nvmf/common.sh@157 -- # true 00:35:40.896 08:32:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:35:40.896 Cannot find device "nvmf_tgt_br2" 00:35:40.896 08:32:14 -- nvmf/common.sh@158 -- # true 00:35:40.896 08:32:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:35:40.896 08:32:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:35:40.896 08:32:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:40.896 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:41.157 08:32:14 -- nvmf/common.sh@161 -- # true 00:35:41.157 08:32:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:41.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:41.157 08:32:14 -- nvmf/common.sh@162 -- # true 00:35:41.157 08:32:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:35:41.157 08:32:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:41.157 08:32:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:41.157 08:32:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:41.157 08:32:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:41.157 08:32:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:41.157 08:32:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:41.157 08:32:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:41.157 08:32:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:35:41.157 
08:32:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:35:41.157 08:32:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:35:41.157 08:32:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:35:41.157 08:32:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:35:41.157 08:32:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:41.157 08:32:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:41.157 08:32:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:41.157 08:32:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:35:41.157 08:32:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:35:41.157 08:32:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:35:41.157 08:32:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:41.157 08:32:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:41.157 08:32:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:41.157 08:32:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:41.157 08:32:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:35:41.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:41.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:35:41.157 00:35:41.157 --- 10.0.0.2 ping statistics --- 00:35:41.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:41.157 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:35:41.157 08:32:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:35:41.157 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:41.157 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:35:41.157 00:35:41.157 --- 10.0.0.3 ping statistics --- 00:35:41.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:41.157 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:35:41.157 08:32:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:41.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:41.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:35:41.157 00:35:41.157 --- 10.0.0.1 ping statistics --- 00:35:41.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:41.157 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:35:41.157 08:32:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:41.157 08:32:14 -- nvmf/common.sh@421 -- # return 0 00:35:41.157 08:32:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:35:41.157 08:32:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:41.157 08:32:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:35:41.157 08:32:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:35:41.157 08:32:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:41.157 08:32:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:35:41.157 08:32:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:35:41.157 08:32:14 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:35:41.157 08:32:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:35:41.157 08:32:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:41.157 08:32:14 -- common/autotest_common.sh@10 -- # set +x 00:35:41.157 08:32:14 -- nvmf/common.sh@469 -- # nvmfpid=72598 00:35:41.157 08:32:14 -- nvmf/common.sh@470 -- # waitforlisten 72598 00:35:41.157 08:32:14 -- common/autotest_common.sh@819 -- # '[' -z 72598 ']' 00:35:41.157 08:32:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:41.157 08:32:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:41.157 08:32:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:41.157 08:32:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:41.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:41.157 08:32:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:41.157 08:32:14 -- common/autotest_common.sh@10 -- # set +x 00:35:41.415 [2024-04-17 08:32:14.525449] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:35:41.415 [2024-04-17 08:32:14.525525] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:41.415 [2024-04-17 08:32:14.664100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:41.675 [2024-04-17 08:32:14.768530] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:35:41.675 [2024-04-17 08:32:14.768674] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:41.675 [2024-04-17 08:32:14.768682] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:41.675 [2024-04-17 08:32:14.768689] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
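The target was launched inside that namespace with --wait-for-rpc, so initialization is deliberately two-phase: the framework only finishes starting after the bdev options have been shrunk over RPC, and that tiny bdev_io pool is what later forces the IO_WAIT path this test exists to exercise. A minimal sketch of the handshake, assuming the repo layout used in this run and substituting a plain rpc_get_methods poll for the full waitforlisten helper:

SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
until $SPDK/scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
    kill -0 $nvmfpid || exit 1                    # target died before listening
    sleep 0.5
done
$SPDK/scripts/rpc.py bdev_set_options -p 5 -c 1   # 5-entry bdev_io pool, 1-entry cache
$SPDK/scripts/rpc.py framework_start_init         # finish subsystem init only now
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

Only after framework_start_init does the transport come up, which is why the '*** TCP Transport Init ***' notice appears below rather than at process start.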
00:35:41.675 [2024-04-17 08:32:14.768775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:41.675 [2024-04-17 08:32:14.768936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:41.675 [2024-04-17 08:32:14.768962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:41.675 [2024-04-17 08:32:14.768966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:42.243 08:32:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:42.243 08:32:15 -- common/autotest_common.sh@852 -- # return 0 00:35:42.243 08:32:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:35:42.243 08:32:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:42.243 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:35:42.243 08:32:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:42.243 08:32:15 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:35:42.243 08:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:42.243 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:35:42.243 08:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:42.243 08:32:15 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:35:42.243 08:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:42.243 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:35:42.243 08:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:42.243 08:32:15 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:42.243 08:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:42.243 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:35:42.243 [2024-04-17 08:32:15.557735] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:42.243 08:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:42.243 08:32:15 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:42.243 08:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:42.243 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:35:42.502 Malloc0 00:35:42.502 08:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:42.502 08:32:15 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:42.502 08:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:42.502 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:35:42.502 08:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:42.502 08:32:15 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:42.502 08:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:42.502 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:35:42.502 08:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:42.502 08:32:15 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:42.502 08:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:42.502 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:35:42.502 [2024-04-17 08:32:15.604636] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:42.502 08:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:42.502 08:32:15 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=72652 00:35:42.502 08:32:15 
-- target/bdev_io_wait.sh@30 -- # READ_PID=72654 00:35:42.502 08:32:15 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=72657 00:35:42.502 08:32:15 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:35:42.502 08:32:15 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:35:42.502 08:32:15 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:35:42.502 08:32:15 -- nvmf/common.sh@520 -- # config=() 00:35:42.502 08:32:15 -- nvmf/common.sh@520 -- # local subsystem config 00:35:42.502 08:32:15 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:35:42.502 08:32:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:42.502 08:32:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:42.502 { 00:35:42.502 "params": { 00:35:42.502 "name": "Nvme$subsystem", 00:35:42.502 "trtype": "$TEST_TRANSPORT", 00:35:42.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.502 "adrfam": "ipv4", 00:35:42.502 "trsvcid": "$NVMF_PORT", 00:35:42.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.502 "hdgst": ${hdgst:-false}, 00:35:42.502 "ddgst": ${ddgst:-false} 00:35:42.502 }, 00:35:42.502 "method": "bdev_nvme_attach_controller" 00:35:42.502 } 00:35:42.502 EOF 00:35:42.502 )") 00:35:42.502 08:32:15 -- nvmf/common.sh@520 -- # config=() 00:35:42.502 08:32:15 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:35:42.502 08:32:15 -- nvmf/common.sh@520 -- # local subsystem config 00:35:42.502 08:32:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:42.502 08:32:15 -- nvmf/common.sh@520 -- # config=() 00:35:42.502 08:32:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:42.502 { 00:35:42.502 "params": { 00:35:42.502 "name": "Nvme$subsystem", 00:35:42.502 "trtype": "$TEST_TRANSPORT", 00:35:42.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.502 "adrfam": "ipv4", 00:35:42.502 "trsvcid": "$NVMF_PORT", 00:35:42.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.502 "hdgst": ${hdgst:-false}, 00:35:42.502 "ddgst": ${ddgst:-false} 00:35:42.502 }, 00:35:42.502 "method": "bdev_nvme_attach_controller" 00:35:42.502 } 00:35:42.502 EOF 00:35:42.502 )") 00:35:42.502 08:32:15 -- nvmf/common.sh@520 -- # local subsystem config 00:35:42.502 08:32:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:42.502 08:32:15 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:35:42.502 08:32:15 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=72663 00:35:42.502 08:32:15 -- target/bdev_io_wait.sh@35 -- # sync 00:35:42.502 08:32:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:42.502 { 00:35:42.502 "params": { 00:35:42.502 "name": "Nvme$subsystem", 00:35:42.502 "trtype": "$TEST_TRANSPORT", 00:35:42.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.502 "adrfam": "ipv4", 00:35:42.502 "trsvcid": "$NVMF_PORT", 00:35:42.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.502 "hdgst": ${hdgst:-false}, 00:35:42.502 "ddgst": ${ddgst:-false} 00:35:42.502 }, 00:35:42.502 "method": "bdev_nvme_attach_controller" 00:35:42.502 } 00:35:42.502 EOF 00:35:42.502 )") 00:35:42.502 08:32:15 -- 
target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:35:42.502 08:32:15 -- nvmf/common.sh@542 -- # cat 00:35:42.502 08:32:15 -- nvmf/common.sh@542 -- # cat 00:35:42.502 08:32:15 -- nvmf/common.sh@542 -- # cat 00:35:42.502 08:32:15 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:35:42.502 08:32:15 -- nvmf/common.sh@520 -- # config=() 00:35:42.502 08:32:15 -- nvmf/common.sh@520 -- # local subsystem config 00:35:42.502 08:32:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:42.502 08:32:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:42.502 { 00:35:42.502 "params": { 00:35:42.502 "name": "Nvme$subsystem", 00:35:42.502 "trtype": "$TEST_TRANSPORT", 00:35:42.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.502 "adrfam": "ipv4", 00:35:42.502 "trsvcid": "$NVMF_PORT", 00:35:42.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.502 "hdgst": ${hdgst:-false}, 00:35:42.502 "ddgst": ${ddgst:-false} 00:35:42.502 }, 00:35:42.502 "method": "bdev_nvme_attach_controller" 00:35:42.502 } 00:35:42.502 EOF 00:35:42.502 )") 00:35:42.502 08:32:15 -- nvmf/common.sh@544 -- # jq . 00:35:42.502 08:32:15 -- nvmf/common.sh@542 -- # cat 00:35:42.502 08:32:15 -- nvmf/common.sh@545 -- # IFS=, 00:35:42.502 08:32:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:35:42.502 "params": { 00:35:42.502 "name": "Nvme1", 00:35:42.502 "trtype": "tcp", 00:35:42.502 "traddr": "10.0.0.2", 00:35:42.502 "adrfam": "ipv4", 00:35:42.502 "trsvcid": "4420", 00:35:42.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:42.502 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:42.502 "hdgst": false, 00:35:42.502 "ddgst": false 00:35:42.502 }, 00:35:42.502 "method": "bdev_nvme_attach_controller" 00:35:42.502 }' 00:35:42.502 08:32:15 -- nvmf/common.sh@544 -- # jq . 00:35:42.502 08:32:15 -- nvmf/common.sh@544 -- # jq . 00:35:42.502 08:32:15 -- nvmf/common.sh@545 -- # IFS=, 00:35:42.503 08:32:15 -- nvmf/common.sh@544 -- # jq . 
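Each bdevperf instance receives its NVMe-oF connection as JSON on a process-substitution descriptor (--json /dev/fd/63): gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per requested subsystem, and the jq . call above merely validates it. Written to an ordinary file instead, the config takes roughly the shape below. The params object is verbatim from the printf output that follows; the surrounding subsystems/bdev envelope is the helper's usual wrapper and is assumed here, since the trace never expands it, and the file path is hypothetical (the test writes no file):

cat > /tmp/bdevperf-nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf-nvme.json -q 128 -o 4096 -w write -t 1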
00:35:42.503 08:32:15 -- nvmf/common.sh@545 -- # IFS=, 00:35:42.503 08:32:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:35:42.503 "params": { 00:35:42.503 "name": "Nvme1", 00:35:42.503 "trtype": "tcp", 00:35:42.503 "traddr": "10.0.0.2", 00:35:42.503 "adrfam": "ipv4", 00:35:42.503 "trsvcid": "4420", 00:35:42.503 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:42.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:42.503 "hdgst": false, 00:35:42.503 "ddgst": false 00:35:42.503 }, 00:35:42.503 "method": "bdev_nvme_attach_controller" 00:35:42.503 }' 00:35:42.503 08:32:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:35:42.503 "params": { 00:35:42.503 "name": "Nvme1", 00:35:42.503 "trtype": "tcp", 00:35:42.503 "traddr": "10.0.0.2", 00:35:42.503 "adrfam": "ipv4", 00:35:42.503 "trsvcid": "4420", 00:35:42.503 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:42.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:42.503 "hdgst": false, 00:35:42.503 "ddgst": false 00:35:42.503 }, 00:35:42.503 "method": "bdev_nvme_attach_controller" 00:35:42.503 }' 00:35:42.503 08:32:15 -- nvmf/common.sh@545 -- # IFS=, 00:35:42.503 08:32:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:35:42.503 "params": { 00:35:42.503 "name": "Nvme1", 00:35:42.503 "trtype": "tcp", 00:35:42.503 "traddr": "10.0.0.2", 00:35:42.503 "adrfam": "ipv4", 00:35:42.503 "trsvcid": "4420", 00:35:42.503 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:42.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:42.503 "hdgst": false, 00:35:42.503 "ddgst": false 00:35:42.503 }, 00:35:42.503 "method": "bdev_nvme_attach_controller" 00:35:42.503 }' 00:35:42.503 08:32:15 -- target/bdev_io_wait.sh@37 -- # wait 72652 00:35:42.503 [2024-04-17 08:32:15.679648] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:35:42.503 [2024-04-17 08:32:15.679714] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:35:42.503 [2024-04-17 08:32:15.685474] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:35:42.503 [2024-04-17 08:32:15.685474] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:35:42.503 [2024-04-17 08:32:15.685545] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:35:42.503 [2024-04-17 08:32:15.685653] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:35:42.503 [2024-04-17 08:32:15.690336] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:35:42.503 [2024-04-17 08:32:15.690422] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:35:42.761 [2024-04-17 08:32:15.890961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.761 [2024-04-17 08:32:15.908048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.761 [2024-04-17 08:32:15.961720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.761 [2024-04-17 08:32:15.991213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:35:42.761 [2024-04-17 08:32:15.997257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:35:42.761 [2024-04-17 08:32:16.037116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.761 [2024-04-17 08:32:16.064810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:35:43.019 [2024-04-17 08:32:16.121817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:35:43.019 Running I/O for 1 seconds... 00:35:43.019 Running I/O for 1 seconds... 00:35:43.019 Running I/O for 1 seconds... 00:35:43.019 Running I/O for 1 seconds... 00:35:43.956 00:35:43.956 Latency(us) 00:35:43.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:43.956 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:35:43.956 Nvme1n1 : 1.02 6479.61 25.31 0.00 0.00 19592.53 1860.19 27588.08 00:35:43.956 =================================================================================================================== 00:35:43.956 Total : 6479.61 25.31 0.00 0.00 19592.53 1860.19 27588.08 00:35:43.956 00:35:43.956 Latency(us) 00:35:43.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:43.956 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:35:43.956 Nvme1n1 : 1.01 6184.73 24.16 0.00 0.00 20627.33 6181.56 41439.36 00:35:43.956 =================================================================================================================== 00:35:43.956 Total : 6184.73 24.16 0.00 0.00 20627.33 6181.56 41439.36 00:35:43.956 00:35:43.956 Latency(us) 00:35:43.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:43.956 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:35:43.956 Nvme1n1 : 1.01 10263.36 40.09 0.00 0.00 12428.71 6124.32 23238.09 00:35:43.956 =================================================================================================================== 00:35:43.956 Total : 10263.36 40.09 0.00 0.00 12428.71 6124.32 23238.09 00:35:43.956 00:35:43.957 Latency(us) 00:35:43.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:43.957 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:35:43.957 Nvme1n1 : 1.00 210184.52 821.03 0.00 0.00 606.58 248.62 1130.42 00:35:43.957 =================================================================================================================== 00:35:43.957 Total : 210184.52 821.03 0.00 0.00 606.58 248.62 1130.42 00:35:44.214 08:32:17 -- target/bdev_io_wait.sh@38 -- # wait 72654 00:35:44.214 08:32:17 -- target/bdev_io_wait.sh@39 -- # wait 72657 00:35:44.214 08:32:17 -- target/bdev_io_wait.sh@40 -- # wait 72663 00:35:44.214 08:32:17 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
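Stripped of that plumbing, the run above is four concurrent bdevperf processes, one per workload, each pinned to its own core (0x10, 0x20, 0x40, 0x80) with a distinct shared-memory instance id so the DPDK file prefixes (spdk1 through spdk4) do not collide. A condensed sketch of the orchestration (the real script captures each PID on the line that launches the process):

BP=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
$BP -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$BP -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
$BP -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$BP -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
sync
wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID   # reap all four before teardown

The flush row is the designed outlier: flush moves no 4 KiB payload and the Malloc-backed namespace has nothing volatile to flush, which is consistent with it sustaining ~210k IOPS at a ~0.6 ms average while the data-moving workloads sit in the 12 to 21 ms range at the same queue depth.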
00:35:44.214 08:32:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:44.214 08:32:17 -- common/autotest_common.sh@10 -- # set +x 00:35:44.214 08:32:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:44.214 08:32:17 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:35:44.214 08:32:17 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:35:44.214 08:32:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:44.214 08:32:17 -- nvmf/common.sh@116 -- # sync 00:35:44.510 08:32:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:44.510 08:32:17 -- nvmf/common.sh@119 -- # set +e 00:35:44.510 08:32:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:44.510 08:32:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:35:44.510 rmmod nvme_tcp 00:35:44.510 rmmod nvme_fabrics 00:35:44.510 rmmod nvme_keyring 00:35:44.510 08:32:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:35:44.510 08:32:17 -- nvmf/common.sh@123 -- # set -e 00:35:44.510 08:32:17 -- nvmf/common.sh@124 -- # return 0 00:35:44.510 08:32:17 -- nvmf/common.sh@477 -- # '[' -n 72598 ']' 00:35:44.510 08:32:17 -- nvmf/common.sh@478 -- # killprocess 72598 00:35:44.510 08:32:17 -- common/autotest_common.sh@926 -- # '[' -z 72598 ']' 00:35:44.510 08:32:17 -- common/autotest_common.sh@930 -- # kill -0 72598 00:35:44.510 08:32:17 -- common/autotest_common.sh@931 -- # uname 00:35:44.510 08:32:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:44.510 08:32:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72598 00:35:44.510 08:32:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:44.510 08:32:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:44.510 08:32:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72598' 00:35:44.510 killing process with pid 72598 00:35:44.510 08:32:17 -- common/autotest_common.sh@945 -- # kill 72598 00:35:44.510 08:32:17 -- common/autotest_common.sh@950 -- # wait 72598 00:35:44.769 08:32:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:35:44.769 08:32:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:44.769 08:32:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:35:44.769 08:32:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:44.769 08:32:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:35:44.769 08:32:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:44.769 08:32:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:44.769 08:32:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:44.769 08:32:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:35:44.769 00:35:44.769 real 0m4.015s 00:35:44.769 user 0m17.825s 00:35:44.769 sys 0m1.657s 00:35:44.769 08:32:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:44.769 08:32:17 -- common/autotest_common.sh@10 -- # set +x 00:35:44.769 ************************************ 00:35:44.769 END TEST nvmf_bdev_io_wait 00:35:44.769 ************************************ 00:35:44.769 08:32:17 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:35:44.769 08:32:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:35:44.769 08:32:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:44.769 08:32:17 -- common/autotest_common.sh@10 -- # set +x 00:35:44.769 ************************************ 00:35:44.769 START TEST nvmf_queue_depth 00:35:44.769 
************************************ 00:35:44.769 08:32:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:35:44.769 * Looking for test storage... 00:35:45.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:45.028 08:32:18 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:45.028 08:32:18 -- nvmf/common.sh@7 -- # uname -s 00:35:45.028 08:32:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:45.028 08:32:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:45.028 08:32:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:45.028 08:32:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:45.028 08:32:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:45.028 08:32:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:45.028 08:32:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:45.028 08:32:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:45.028 08:32:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:45.028 08:32:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:45.028 08:32:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:35:45.028 08:32:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:35:45.028 08:32:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:45.029 08:32:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:45.029 08:32:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:45.029 08:32:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:45.029 08:32:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:45.029 08:32:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:45.029 08:32:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:45.029 08:32:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.029 08:32:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.029 08:32:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.029 08:32:18 -- paths/export.sh@5 -- # export PATH 00:35:45.029 08:32:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.029 08:32:18 -- nvmf/common.sh@46 -- # : 0 00:35:45.029 08:32:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:35:45.029 08:32:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:35:45.029 08:32:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:35:45.029 08:32:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:45.029 08:32:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:45.029 08:32:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:35:45.029 08:32:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:35:45.029 08:32:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:35:45.029 08:32:18 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:35:45.029 08:32:18 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:35:45.029 08:32:18 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:45.029 08:32:18 -- target/queue_depth.sh@19 -- # nvmftestinit 00:35:45.029 08:32:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:35:45.029 08:32:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:45.029 08:32:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:35:45.029 08:32:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:35:45.029 08:32:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:35:45.029 08:32:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:45.029 08:32:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:45.029 08:32:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:45.029 08:32:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:35:45.029 08:32:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:35:45.029 08:32:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:35:45.029 08:32:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:35:45.029 08:32:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:35:45.029 08:32:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:35:45.029 08:32:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:45.029 08:32:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:45.029 08:32:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:35:45.029 08:32:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:35:45.029 08:32:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:45.029 08:32:18 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:45.029 08:32:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:45.029 08:32:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:45.029 08:32:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:45.029 08:32:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:45.029 08:32:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:45.029 08:32:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:45.029 08:32:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:35:45.029 08:32:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:35:45.029 Cannot find device "nvmf_tgt_br" 00:35:45.029 08:32:18 -- nvmf/common.sh@154 -- # true 00:35:45.029 08:32:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:35:45.029 Cannot find device "nvmf_tgt_br2" 00:35:45.029 08:32:18 -- nvmf/common.sh@155 -- # true 00:35:45.029 08:32:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:35:45.029 08:32:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:35:45.029 Cannot find device "nvmf_tgt_br" 00:35:45.029 08:32:18 -- nvmf/common.sh@157 -- # true 00:35:45.029 08:32:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:35:45.029 Cannot find device "nvmf_tgt_br2" 00:35:45.029 08:32:18 -- nvmf/common.sh@158 -- # true 00:35:45.029 08:32:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:35:45.029 08:32:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:35:45.029 08:32:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:45.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:45.029 08:32:18 -- nvmf/common.sh@161 -- # true 00:35:45.029 08:32:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:45.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:45.029 08:32:18 -- nvmf/common.sh@162 -- # true 00:35:45.029 08:32:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:35:45.029 08:32:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:45.029 08:32:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:45.029 08:32:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:45.029 08:32:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:45.288 08:32:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:45.288 08:32:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:45.288 08:32:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:45.288 08:32:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:35:45.288 08:32:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:35:45.288 08:32:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:35:45.288 08:32:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:35:45.288 08:32:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:35:45.288 08:32:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:45.288 08:32:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:35:45.288 08:32:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:45.288 08:32:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:35:45.288 08:32:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:35:45.288 08:32:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:35:45.288 08:32:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:45.288 08:32:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:45.288 08:32:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:45.288 08:32:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:45.288 08:32:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:35:45.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:45.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:35:45.288 00:35:45.288 --- 10.0.0.2 ping statistics --- 00:35:45.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.288 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:35:45.288 08:32:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:35:45.288 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:45.288 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:35:45.288 00:35:45.288 --- 10.0.0.3 ping statistics --- 00:35:45.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.288 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:35:45.289 08:32:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:45.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:45.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:35:45.289 00:35:45.289 --- 10.0.0.1 ping statistics --- 00:35:45.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.289 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:35:45.289 08:32:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:45.289 08:32:18 -- nvmf/common.sh@421 -- # return 0 00:35:45.289 08:32:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:35:45.289 08:32:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:45.289 08:32:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:35:45.289 08:32:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:35:45.289 08:32:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:45.289 08:32:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:35:45.289 08:32:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:35:45.289 08:32:18 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:35:45.289 08:32:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:35:45.289 08:32:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:45.289 08:32:18 -- common/autotest_common.sh@10 -- # set +x 00:35:45.289 08:32:18 -- nvmf/common.sh@469 -- # nvmfpid=72881 00:35:45.289 08:32:18 -- nvmf/common.sh@470 -- # waitforlisten 72881 00:35:45.289 08:32:18 -- common/autotest_common.sh@819 -- # '[' -z 72881 ']' 00:35:45.289 08:32:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.289 08:32:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:45.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
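Everything after the fork gates on waitforlisten, which blocks until the new process answers on its UNIX-domain RPC socket or dies. The helper behaves roughly like the sketch below (a behavioral sketch, not autotest_common.sh's exact implementation; the retry count and sleep interval are illustrative):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i != 0; i--)); do
        kill -0 "$pid" 2> /dev/null || return 1   # the app died before listening
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                              # socket is up and answering RPCs
        fi
        sleep 0.5
    done
    return 1                                      # never came up; let the caller fail the test
}

The same helper is reused below for the idle bdevperf instance, just pointed at /var/tmp/bdevperf.sock instead of the default socket.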
00:35:45.289 08:32:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.289 08:32:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:45.289 08:32:18 -- common/autotest_common.sh@10 -- # set +x 00:35:45.289 08:32:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:45.289 [2024-04-17 08:32:18.591364] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:35:45.289 [2024-04-17 08:32:18.591463] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.547 [2024-04-17 08:32:18.729894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:45.547 [2024-04-17 08:32:18.835487] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:35:45.547 [2024-04-17 08:32:18.835626] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:45.547 [2024-04-17 08:32:18.835634] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:45.547 [2024-04-17 08:32:18.835640] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:45.547 [2024-04-17 08:32:18.835667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.484 08:32:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:46.484 08:32:19 -- common/autotest_common.sh@852 -- # return 0 00:35:46.484 08:32:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:35:46.484 08:32:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:46.484 08:32:19 -- common/autotest_common.sh@10 -- # set +x 00:35:46.484 08:32:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:46.484 08:32:19 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:46.484 08:32:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:46.484 08:32:19 -- common/autotest_common.sh@10 -- # set +x 00:35:46.484 [2024-04-17 08:32:19.561055] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:46.484 08:32:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:46.484 08:32:19 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:46.484 08:32:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:46.484 08:32:19 -- common/autotest_common.sh@10 -- # set +x 00:35:46.484 Malloc0 00:35:46.484 08:32:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:46.484 08:32:19 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:46.484 08:32:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:46.484 08:32:19 -- common/autotest_common.sh@10 -- # set +x 00:35:46.484 08:32:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:46.484 08:32:19 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:46.484 08:32:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:46.484 08:32:19 -- common/autotest_common.sh@10 -- # set +x 00:35:46.484 08:32:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:46.484 08:32:19 -- 
target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:46.484 08:32:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:46.484 08:32:19 -- common/autotest_common.sh@10 -- # set +x 00:35:46.484 [2024-04-17 08:32:19.635117] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:46.484 08:32:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:46.484 08:32:19 -- target/queue_depth.sh@30 -- # bdevperf_pid=72937 00:35:46.485 08:32:19 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:35:46.485 08:32:19 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:46.485 08:32:19 -- target/queue_depth.sh@33 -- # waitforlisten 72937 /var/tmp/bdevperf.sock 00:35:46.485 08:32:19 -- common/autotest_common.sh@819 -- # '[' -z 72937 ']' 00:35:46.485 08:32:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:46.485 08:32:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:46.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:46.485 08:32:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:46.485 08:32:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:46.485 08:32:19 -- common/autotest_common.sh@10 -- # set +x 00:35:46.485 [2024-04-17 08:32:19.692072] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:35:46.485 [2024-04-17 08:32:19.692144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72937 ] 00:35:46.743 [2024-04-17 08:32:19.831022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.743 [2024-04-17 08:32:19.935803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:47.310 08:32:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:47.310 08:32:20 -- common/autotest_common.sh@852 -- # return 0 00:35:47.310 08:32:20 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:47.310 08:32:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:47.310 08:32:20 -- common/autotest_common.sh@10 -- # set +x 00:35:47.568 NVMe0n1 00:35:47.568 08:32:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:47.568 08:32:20 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:47.568 Running I/O for 10 seconds... 
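This bdevperf is driven differently from the four short runs earlier: -z starts it idle on its own RPC socket, the NVMe-oF controller is attached over that socket, and perform_tests launches the 10-second, queue-depth-1024 verify run whose results follow. The sequence just traced, condensed into a sketch:

SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
waitforlisten $bdevperf_pid /var/tmp/bdevperf.sock       # wait for the idle app's RPC socket
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

At -q 1024 against a single namespace this is as much a back-pressure test as a benchmark: the ~15.5k IOPS at a ~66 ms average reported below is what Little's law predicts for 1024 commands continuously in flight (1024 / 15491 per second is about 66 ms).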
00:35:57.542
00:35:57.542 Latency(us)
00:35:57.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:57.542 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:35:57.542 Verification LBA range: start 0x0 length 0x4000
00:35:57.542 NVMe0n1 : 10.06 15491.16 60.51 0.00 0.00 65873.27 13851.28 51055.12
00:35:57.542 ===================================================================================================================
00:35:57.542 Total : 15491.16 60.51 0.00 0.00 65873.27 13851.28 51055.12
00:35:57.542 0
00:35:57.542 08:32:30 -- target/queue_depth.sh@39 -- # killprocess 72937
00:35:57.542 08:32:30 -- common/autotest_common.sh@926 -- # '[' -z 72937 ']'
00:35:57.542 08:32:30 -- common/autotest_common.sh@930 -- # kill -0 72937
00:35:57.542 08:32:30 -- common/autotest_common.sh@931 -- # uname
00:35:57.542 08:32:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:35:57.542 08:32:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72937
00:35:57.802 08:32:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:35:57.802 08:32:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:35:57.802 killing process with pid 72937
00:35:57.802 08:32:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72937'
00:35:57.802 08:32:30 -- common/autotest_common.sh@945 -- # kill 72937
00:35:57.802 Received shutdown signal, test time was about 10.000000 seconds
00:35:57.802
00:35:57.802 Latency(us)
00:35:57.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:57.802 ===================================================================================================================
00:35:57.802 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:57.802 08:32:30 -- common/autotest_common.sh@950 -- # wait 72937
00:35:57.802 08:32:31 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:35:57.802 08:32:31 -- target/queue_depth.sh@43 -- # nvmftestfini
00:35:57.802 08:32:31 -- nvmf/common.sh@476 -- # nvmfcleanup
00:35:57.802 08:32:31 -- nvmf/common.sh@116 -- # sync
00:35:58.062 08:32:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:35:58.062 08:32:31 -- nvmf/common.sh@119 -- # set +e
00:35:58.062 08:32:31 -- nvmf/common.sh@120 -- # for i in {1..20}
00:35:58.062 08:32:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:35:58.062 rmmod nvme_tcp
00:35:58.062 rmmod nvme_fabrics
00:35:58.062 rmmod nvme_keyring
00:35:58.062 08:32:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:35:58.062 08:32:31 -- nvmf/common.sh@123 -- # set -e
00:35:58.062 08:32:31 -- nvmf/common.sh@124 -- # return 0
00:35:58.062 08:32:31 -- nvmf/common.sh@477 -- # '[' -n 72881 ']'
00:35:58.062 08:32:31 -- nvmf/common.sh@478 -- # killprocess 72881
00:35:58.062 08:32:31 -- common/autotest_common.sh@926 -- # '[' -z 72881 ']'
00:35:58.062 08:32:31 -- common/autotest_common.sh@930 -- # kill -0 72881
00:35:58.062 08:32:31 -- common/autotest_common.sh@931 -- # uname
00:35:58.062 08:32:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:35:58.062 08:32:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72881
00:35:58.062 08:32:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:35:58.062 08:32:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:35:58.062 killing process with pid 72881
00:35:58.063 08:32:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72881'
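One sanity check worth making on the table above: with 1024 commands held in flight at a sustained 15491 IOPS, Little's law predicts a mean latency of 1024 / 15491 s ≈ 66.1 ms, which agrees closely with the reported 65873.27 us average, confirming the queue really was kept full for the whole run.
00:35:58.063 08:32:31 -- 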
common/autotest_common.sh@945 -- # kill 72881 00:35:58.063 08:32:31 -- common/autotest_common.sh@950 -- # wait 72881 00:35:58.322 08:32:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:35:58.322 08:32:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:58.322 08:32:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:35:58.322 08:32:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:58.322 08:32:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:35:58.322 08:32:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:58.322 08:32:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:58.322 08:32:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:58.322 08:32:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:35:58.322 00:35:58.322 real 0m13.577s 00:35:58.322 user 0m23.268s 00:35:58.322 sys 0m2.034s 00:35:58.322 08:32:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:58.322 08:32:31 -- common/autotest_common.sh@10 -- # set +x 00:35:58.322 ************************************ 00:35:58.322 END TEST nvmf_queue_depth 00:35:58.322 ************************************ 00:35:58.322 08:32:31 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:35:58.322 08:32:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:35:58.322 08:32:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:58.322 08:32:31 -- common/autotest_common.sh@10 -- # set +x 00:35:58.322 ************************************ 00:35:58.322 START TEST nvmf_multipath 00:35:58.322 ************************************ 00:35:58.322 08:32:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:35:58.581 * Looking for test storage... 
00:35:58.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:58.581 08:32:31 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:58.581 08:32:31 -- nvmf/common.sh@7 -- # uname -s 00:35:58.581 08:32:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:58.581 08:32:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:58.581 08:32:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:58.581 08:32:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:58.581 08:32:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:58.581 08:32:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:58.581 08:32:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:58.581 08:32:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:58.581 08:32:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:58.581 08:32:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:58.581 08:32:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:35:58.581 08:32:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:35:58.581 08:32:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:58.581 08:32:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:58.581 08:32:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:58.581 08:32:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:58.581 08:32:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:58.581 08:32:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:58.581 08:32:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:58.581 08:32:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.581 08:32:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.581 08:32:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.581 08:32:31 -- 
paths/export.sh@5 -- # export PATH 00:35:58.581 08:32:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.581 08:32:31 -- nvmf/common.sh@46 -- # : 0 00:35:58.581 08:32:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:35:58.581 08:32:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:35:58.581 08:32:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:35:58.581 08:32:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:58.581 08:32:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:58.581 08:32:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:35:58.581 08:32:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:35:58.581 08:32:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:35:58.581 08:32:31 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:58.581 08:32:31 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:58.581 08:32:31 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:58.581 08:32:31 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:58.581 08:32:31 -- target/multipath.sh@43 -- # nvmftestinit 00:35:58.581 08:32:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:35:58.581 08:32:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:58.581 08:32:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:35:58.581 08:32:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:35:58.582 08:32:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:35:58.582 08:32:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:58.582 08:32:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:58.582 08:32:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:58.582 08:32:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:35:58.582 08:32:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:35:58.582 08:32:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:35:58.582 08:32:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:35:58.582 08:32:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:35:58.582 08:32:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:35:58.582 08:32:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:58.582 08:32:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:58.582 08:32:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:35:58.582 08:32:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:35:58.582 08:32:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:58.582 08:32:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:58.582 08:32:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:58.582 08:32:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:58.582 08:32:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:58.582 08:32:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:58.582 08:32:31 -- nvmf/common.sh@150 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:58.582 08:32:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:58.582 08:32:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:35:58.582 08:32:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:35:58.582 Cannot find device "nvmf_tgt_br" 00:35:58.582 08:32:31 -- nvmf/common.sh@154 -- # true 00:35:58.582 08:32:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:35:58.582 Cannot find device "nvmf_tgt_br2" 00:35:58.582 08:32:31 -- nvmf/common.sh@155 -- # true 00:35:58.582 08:32:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:35:58.582 08:32:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:35:58.582 Cannot find device "nvmf_tgt_br" 00:35:58.582 08:32:31 -- nvmf/common.sh@157 -- # true 00:35:58.582 08:32:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:35:58.582 Cannot find device "nvmf_tgt_br2" 00:35:58.582 08:32:31 -- nvmf/common.sh@158 -- # true 00:35:58.582 08:32:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:35:58.582 08:32:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:35:58.841 08:32:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:58.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:58.841 08:32:31 -- nvmf/common.sh@161 -- # true 00:35:58.841 08:32:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:58.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:58.841 08:32:31 -- nvmf/common.sh@162 -- # true 00:35:58.841 08:32:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:35:58.841 08:32:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:58.841 08:32:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:58.841 08:32:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:58.841 08:32:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:58.841 08:32:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:58.841 08:32:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:58.841 08:32:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:58.841 08:32:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:35:58.841 08:32:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:35:58.841 08:32:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:35:58.841 08:32:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:35:58.841 08:32:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:35:58.841 08:32:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:58.841 08:32:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:58.841 08:32:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:58.841 08:32:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:35:58.841 08:32:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:35:58.841 08:32:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:35:58.841 08:32:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:58.841 08:32:32 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:58.841 08:32:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:58.841 08:32:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:58.841 08:32:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:35:58.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:58.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:35:58.841 00:35:58.841 --- 10.0.0.2 ping statistics --- 00:35:58.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.841 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:35:58.841 08:32:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:35:58.841 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:58.841 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:35:58.841 00:35:58.841 --- 10.0.0.3 ping statistics --- 00:35:58.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.841 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:35:58.841 08:32:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:58.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:58.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:35:58.841 00:35:58.841 --- 10.0.0.1 ping statistics --- 00:35:58.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.841 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:35:58.841 08:32:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:58.841 08:32:32 -- nvmf/common.sh@421 -- # return 0 00:35:58.841 08:32:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:35:58.841 08:32:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:58.841 08:32:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:35:58.841 08:32:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:35:58.841 08:32:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:58.841 08:32:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:35:58.841 08:32:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:35:58.841 08:32:32 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:35:58.841 08:32:32 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:35:58.841 08:32:32 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:35:58.841 08:32:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:35:58.841 08:32:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:58.841 08:32:32 -- common/autotest_common.sh@10 -- # set +x 00:35:58.841 08:32:32 -- nvmf/common.sh@469 -- # nvmfpid=73275 00:35:58.841 08:32:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:35:58.841 08:32:32 -- nvmf/common.sh@470 -- # waitforlisten 73275 00:35:58.841 08:32:32 -- common/autotest_common.sh@819 -- # '[' -z 73275 ']' 00:35:58.841 08:32:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:58.841 08:32:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:58.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:58.841 08:32:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
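While the multipath target comes up, it helps to condense what the nvmf_veth_init trace above just built: a network namespace for the target, veth pairs whose host-side ends are enslaved to a bridge, and an iptables rule admitting port 4420. A sketch of the same fixture, trimmed to the first target interface (names and addresses are the ones in the trace; the real helper also brings up nvmf_tgt_if2 with 10.0.0.3 for the second path):

    ip netns add nvmf_tgt_ns_spdk                              # target gets its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge                            # bridge joins the host-side ends
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # initiator-to-target reachability check

Keeping the target in a namespace is what lets a single VM act as both initiator and target over a real TCP path rather than loopback.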
00:35:58.841 08:32:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:58.841 08:32:32 -- common/autotest_common.sh@10 -- # set +x 00:35:59.100 [2024-04-17 08:32:32.211826] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:35:59.100 [2024-04-17 08:32:32.211894] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:59.100 [2024-04-17 08:32:32.345180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:59.358 [2024-04-17 08:32:32.449509] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:35:59.358 [2024-04-17 08:32:32.449653] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:59.358 [2024-04-17 08:32:32.449661] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:59.358 [2024-04-17 08:32:32.449668] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:59.358 [2024-04-17 08:32:32.449853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:59.358 [2024-04-17 08:32:32.449996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:59.358 [2024-04-17 08:32:32.453423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:59.358 [2024-04-17 08:32:32.453429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:59.926 08:32:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:59.926 08:32:33 -- common/autotest_common.sh@852 -- # return 0 00:35:59.926 08:32:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:35:59.926 08:32:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:59.926 08:32:33 -- common/autotest_common.sh@10 -- # set +x 00:35:59.926 08:32:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:59.926 08:32:33 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:00.184 [2024-04-17 08:32:33.328391] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:00.184 08:32:33 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:36:00.502 Malloc0 00:36:00.502 08:32:33 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:36:00.760 08:32:33 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:01.018 08:32:34 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:01.018 [2024-04-17 08:32:34.303450] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:01.018 08:32:34 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:36:01.276 [2024-04-17 08:32:34.511269] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:01.276 08:32:34 -- target/multipath.sh@67 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:36:01.534 08:32:34 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:36:01.791 08:32:34 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:36:01.791 08:32:34 -- common/autotest_common.sh@1177 -- # local i=0 00:36:01.791 08:32:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:36:01.791 08:32:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:36:01.791 08:32:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:36:03.687 08:32:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:36:03.687 08:32:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:36:03.687 08:32:36 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:36:03.687 08:32:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:36:03.687 08:32:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:36:03.687 08:32:36 -- common/autotest_common.sh@1187 -- # return 0 00:36:03.687 08:32:36 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:36:03.687 08:32:36 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:36:03.687 08:32:36 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:36:03.687 08:32:36 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:36:03.687 08:32:36 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:36:03.687 08:32:36 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:36:03.687 08:32:36 -- target/multipath.sh@38 -- # return 0 00:36:03.687 08:32:36 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:36:03.687 08:32:36 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:36:03.687 08:32:36 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:36:03.687 08:32:36 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:36:03.687 08:32:36 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:36:03.687 08:32:36 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:36:03.687 08:32:36 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:36:03.687 08:32:36 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:36:03.687 08:32:36 -- target/multipath.sh@22 -- # local timeout=20 00:36:03.687 08:32:36 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:36:03.687 08:32:36 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:36:03.687 08:32:36 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:36:03.687 08:32:36 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:36:03.687 08:32:36 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:36:03.687 08:32:36 -- target/multipath.sh@22 -- # local timeout=20 00:36:03.687 08:32:36 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:36:03.687 08:32:36 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:36:03.687 08:32:36 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:36:03.687 08:32:36 -- target/multipath.sh@85 -- # echo numa 00:36:03.687 08:32:37 -- target/multipath.sh@88 -- # fio_pid=73412 00:36:03.687 08:32:37 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:36:03.687 08:32:37 -- target/multipath.sh@90 -- # sleep 1 00:36:03.946 [global] 00:36:03.946 thread=1 00:36:03.946 invalidate=1 00:36:03.946 rw=randrw 00:36:03.946 time_based=1 00:36:03.946 runtime=6 00:36:03.946 ioengine=libaio 00:36:03.946 direct=1 00:36:03.946 bs=4096 00:36:03.946 iodepth=128 00:36:03.946 norandommap=0 00:36:03.946 numjobs=1 00:36:03.946 00:36:03.946 verify_dump=1 00:36:03.946 verify_backlog=512 00:36:03.946 verify_state_save=0 00:36:03.946 do_verify=1 00:36:03.946 verify=crc32c-intel 00:36:03.946 [job0] 00:36:03.946 filename=/dev/nvme0n1 00:36:03.946 Could not set queue depth (nvme0n1) 00:36:03.946 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:03.946 fio-3.35 00:36:03.946 Starting 1 thread 00:36:04.881 08:32:38 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:36:05.140 08:32:38 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:36:05.400 08:32:38 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:36:05.400 08:32:38 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:36:05.400 08:32:38 -- target/multipath.sh@22 -- # local timeout=20 00:36:05.400 08:32:38 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:36:05.400 08:32:38 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:36:05.400 08:32:38 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:36:05.400 08:32:38 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:36:05.400 08:32:38 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:36:05.400 08:32:38 -- target/multipath.sh@22 -- # local timeout=20 00:36:05.400 08:32:38 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:36:05.400 08:32:38 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:36:05.400 08:32:38 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:36:05.400 08:32:38 -- target/multipath.sh@25 -- # sleep 1s 00:36:06.338 08:32:39 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:36:06.338 08:32:39 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:36:06.338 08:32:39 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:36:06.338 08:32:39 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:06.598 08:32:39 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:36:06.877 08:32:39 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:36:06.877 08:32:39 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:36:06.877 08:32:39 -- target/multipath.sh@22 -- # local timeout=20 00:36:06.878 08:32:39 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:36:06.878 08:32:39 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:36:06.878 08:32:39 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:36:06.878 08:32:39 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:36:06.878 08:32:39 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:36:06.878 08:32:39 -- target/multipath.sh@22 -- # local timeout=20 00:36:06.878 08:32:39 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:36:06.878 08:32:39 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:36:06.878 08:32:39 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:36:06.878 08:32:39 -- target/multipath.sh@25 -- # sleep 1s 00:36:07.816 08:32:40 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:36:07.816 08:32:40 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]]
00:36:07.816 08:32:40 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:36:07.816 08:32:40 -- target/multipath.sh@104 -- # wait 73412
00:36:10.358
00:36:10.358 job0: (groupid=0, jobs=1): err= 0: pid=73440: Wed Apr 17 08:32:43 2024
00:36:10.358 read: IOPS=12.6k, BW=49.3MiB/s (51.7MB/s)(296MiB/6005msec)
00:36:10.358 slat (usec): min=4, max=4557, avg=43.87, stdev=185.08
00:36:10.358 clat (usec): min=458, max=18114, avg=7005.08, stdev=1253.75
00:36:10.358 lat (usec): min=486, max=18126, avg=7048.95, stdev=1258.94
00:36:10.358 clat percentiles (usec):
00:36:10.358 | 1.00th=[ 3949], 5.00th=[ 5211], 10.00th=[ 5669], 20.00th=[ 6194],
00:36:10.358 | 30.00th=[ 6456], 40.00th=[ 6652], 50.00th=[ 6980], 60.00th=[ 7242],
00:36:10.358 | 70.00th=[ 7439], 80.00th=[ 7767], 90.00th=[ 8291], 95.00th=[ 9241],
00:36:10.358 | 99.00th=[10683], 99.50th=[11076], 99.90th=[14222], 99.95th=[16712],
00:36:10.358 | 99.99th=[17957]
00:36:10.358 bw ( KiB/s): min=11808, max=33928, per=52.52%, avg=26493.09, stdev=6558.20, samples=11
00:36:10.358 iops : min= 2952, max= 8482, avg=6623.27, stdev=1639.55, samples=11
00:36:10.358 write: IOPS=7265, BW=28.4MiB/s (29.8MB/s)(149MiB/5245msec); 0 zone resets
00:36:10.358 slat (usec): min=9, max=1744, avg=58.75, stdev=116.61
00:36:10.358 clat (usec): min=238, max=16801, avg=6039.34, stdev=1170.26
00:36:10.358 lat (usec): min=329, max=16828, avg=6098.09, stdev=1173.38
00:36:10.358 clat percentiles (usec):
00:36:10.358 | 1.00th=[ 2868], 5.00th=[ 4146], 10.00th=[ 4752], 20.00th=[ 5342],
00:36:10.358 | 30.00th=[ 5669], 40.00th=[ 5866], 50.00th=[ 6063], 60.00th=[ 6259],
00:36:10.358 | 70.00th=[ 6456], 80.00th=[ 6718], 90.00th=[ 7046], 95.00th=[ 7570],
00:36:10.359 | 99.00th=[ 9765], 99.50th=[10814], 99.90th=[12649], 99.95th=[15008],
00:36:10.359 | 99.99th=[16712]
00:36:10.359 bw ( KiB/s): min=12288, max=33240, per=90.93%, avg=26426.18, stdev=6157.67, samples=11
00:36:10.359 iops : min= 3072, max= 8310, avg=6606.55, stdev=1539.42, samples=11
00:36:10.359 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.03%
00:36:10.359 lat (msec) : 2=0.19%, 4=1.85%, 10=95.90%, 20=1.99%
00:36:10.359 cpu : usr=6.23%, sys=30.09%, ctx=8717, majf=0, minf=96
00:36:10.359 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:36:10.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:10.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:36:10.359 issued rwts: total=75733,38108,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:10.359 latency : target=0, window=0, percentile=100.00%, depth=128
00:36:10.359
00:36:10.359 Run status group 0 (all jobs):
00:36:10.359 READ: bw=49.3MiB/s (51.7MB/s), 49.3MiB/s-49.3MiB/s (51.7MB/s-51.7MB/s), io=296MiB (310MB), run=6005-6005msec
00:36:10.359 WRITE: bw=28.4MiB/s (29.8MB/s), 28.4MiB/s-28.4MiB/s (29.8MB/s-29.8MB/s), io=149MiB (156MB), run=5245-5245msec
00:36:10.359
00:36:10.359 Disk stats (read/write):
00:36:10.359 nvme0n1: ios=74419/37607, merge=0/0, ticks=469951/201254, in_queue=671205, util=98.68%
00:36:10.359 08:32:43 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:36:10.619 08:32:43 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
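The failover fio just survived is driven entirely from the target side, and one spelling quirk in the trace is worth calling out: the RPC takes non_optimized with an underscore, while sysfs reports non-optimized with a hyphen. A reduced sketch of the set-and-poll pattern used here (check_ana_state's 20-second loop stripped to its core; the real helper also fails if the ana_state file disappears):

    # Flip the two listeners' ANA states while fio I/O is still running
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
    # Then poll sysfs until the initiator's view of the path catches up
    timeout=20
    while [[ $(cat /sys/block/nvme0c1n1/ana_state) != "non-optimized" ]]; do
        (( timeout-- == 0 )) && exit 1
        sleep 1s
    done
00:36:10.619 08:32:43 -- 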
target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:36:10.619 08:32:43 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:36:10.619 08:32:43 -- target/multipath.sh@22 -- # local timeout=20 00:36:10.619 08:32:43 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:36:10.620 08:32:43 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:36:10.620 08:32:43 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:36:10.620 08:32:43 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:36:10.620 08:32:43 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:36:10.620 08:32:43 -- target/multipath.sh@22 -- # local timeout=20 00:36:10.620 08:32:43 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:36:10.620 08:32:43 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:36:10.620 08:32:43 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:36:10.620 08:32:43 -- target/multipath.sh@25 -- # sleep 1s 00:36:11.558 08:32:44 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:36:11.558 08:32:44 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:36:11.558 08:32:44 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:36:11.558 08:32:44 -- target/multipath.sh@113 -- # echo round-robin 00:36:11.558 08:32:44 -- target/multipath.sh@116 -- # fio_pid=73565 00:36:11.558 08:32:44 -- target/multipath.sh@118 -- # sleep 1 00:36:11.558 08:32:44 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:36:11.558 [global] 00:36:11.558 thread=1 00:36:11.558 invalidate=1 00:36:11.558 rw=randrw 00:36:11.558 time_based=1 00:36:11.558 runtime=6 00:36:11.558 ioengine=libaio 00:36:11.558 direct=1 00:36:11.558 bs=4096 00:36:11.558 iodepth=128 00:36:11.558 norandommap=0 00:36:11.558 numjobs=1 00:36:11.558 00:36:11.558 verify_dump=1 00:36:11.558 verify_backlog=512 00:36:11.558 verify_state_save=0 00:36:11.558 do_verify=1 00:36:11.558 verify=crc32c-intel 00:36:11.558 [job0] 00:36:11.558 filename=/dev/nvme0n1 00:36:11.558 Could not set queue depth (nvme0n1) 00:36:11.817 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:11.817 fio-3.35 00:36:11.817 Starting 1 thread 00:36:12.756 08:32:45 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:36:12.756 08:32:45 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:36:13.014 08:32:46 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:36:13.014 08:32:46 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:36:13.014 08:32:46 -- target/multipath.sh@22 -- # local timeout=20 00:36:13.014 08:32:46 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:36:13.014 08:32:46 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:36:13.014 08:32:46 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:36:13.014 08:32:46 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:36:13.014 08:32:46 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:36:13.014 08:32:46 -- target/multipath.sh@22 -- # local timeout=20 00:36:13.014 08:32:46 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:36:13.014 08:32:46 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:36:13.014 08:32:46 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:36:13.014 08:32:46 -- target/multipath.sh@25 -- # sleep 1s 00:36:13.949 08:32:47 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:36:13.949 08:32:47 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:36:13.949 08:32:47 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:36:13.949 08:32:47 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:14.208 08:32:47 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:36:14.474 08:32:47 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:36:14.474 08:32:47 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:36:14.474 08:32:47 -- target/multipath.sh@22 -- # local timeout=20 00:36:14.474 08:32:47 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:36:14.474 08:32:47 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:36:14.474 08:32:47 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:36:14.474 08:32:47 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:36:14.474 08:32:47 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:36:14.474 08:32:47 -- target/multipath.sh@22 -- # local timeout=20 00:36:14.474 08:32:47 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:36:14.474 08:32:47 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:36:14.474 08:32:47 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:36:14.474 08:32:47 -- target/multipath.sh@25 -- # sleep 1s 00:36:15.412 08:32:48 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:36:15.412 08:32:48 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]]
00:36:15.412 08:32:48 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:36:15.412 08:32:48 -- target/multipath.sh@132 -- # wait 73565
00:36:17.953
00:36:17.953 job0: (groupid=0, jobs=1): err= 0: pid=73586: Wed Apr 17 08:32:51 2024
00:36:17.953 read: IOPS=13.4k, BW=52.5MiB/s (55.1MB/s)(315MiB/6005msec)
00:36:17.953 slat (usec): min=2, max=5376, avg=37.71, stdev=169.31
00:36:17.953 clat (usec): min=260, max=14832, avg=6661.34, stdev=1362.56
00:36:17.953 lat (usec): min=273, max=14839, avg=6699.04, stdev=1372.83
00:36:17.953 clat percentiles (usec):
00:36:17.953 | 1.00th=[ 3064], 5.00th=[ 4359], 10.00th=[ 4948], 20.00th=[ 5735],
00:36:17.953 | 30.00th=[ 6194], 40.00th=[ 6456], 50.00th=[ 6718], 60.00th=[ 6980],
00:36:17.953 | 70.00th=[ 7242], 80.00th=[ 7570], 90.00th=[ 8094], 95.00th=[ 8717],
00:36:17.953 | 99.00th=[10552], 99.50th=[11076], 99.90th=[12518], 99.95th=[13435],
00:36:17.953 | 99.99th=[13960]
00:36:17.953 bw ( KiB/s): min=15480, max=41676, per=53.56%, avg=28815.91, stdev=8565.63, samples=11
00:36:17.953 iops : min= 3870, max=10419, avg=7203.91, stdev=2141.32, samples=11
00:36:17.953 write: IOPS=8304, BW=32.4MiB/s (34.0MB/s)(159MiB/4894msec); 0 zone resets
00:36:17.953 slat (usec): min=4, max=1856, avg=50.40, stdev=102.12
00:36:17.953 clat (usec): min=356, max=13163, avg=5485.01, stdev=1388.36
00:36:17.953 lat (usec): min=462, max=13194, avg=5535.41, stdev=1399.71
00:36:17.953 clat percentiles (usec):
00:36:17.953 | 1.00th=[ 2409], 5.00th=[ 3130], 10.00th=[ 3556], 20.00th=[ 4178],
00:36:17.953 | 30.00th=[ 4817], 40.00th=[ 5407], 50.00th=[ 5735], 60.00th=[ 5997],
00:36:17.953 | 70.00th=[ 6259], 80.00th=[ 6521], 90.00th=[ 6849], 95.00th=[ 7242],
00:36:17.953 | 99.00th=[ 9241], 99.50th=[ 9765], 99.90th=[11338], 99.95th=[11994],
00:36:17.953 | 99.99th=[13042]
00:36:17.953 bw ( KiB/s): min=16296, max=42307, per=86.83%, avg=28842.36, stdev=8249.24, samples=11
00:36:17.953 iops : min= 4074, max=10576, avg=7210.45, stdev=2062.10, samples=11
00:36:17.953 lat (usec) : 500=0.02%, 750=0.03%, 1000=0.04%
00:36:17.953 lat (msec) : 2=0.29%, 4=7.40%, 10=90.86%, 20=1.36%
00:36:17.953 cpu : usr=6.23%, sys=29.91%, ctx=9075, majf=0, minf=145
00:36:17.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:36:17.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:17.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:36:17.953 issued rwts: total=80764,40641,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:17.953 latency : target=0, window=0, percentile=100.00%, depth=128
00:36:17.953
00:36:17.953 Run status group 0 (all jobs):
00:36:17.953 READ: bw=52.5MiB/s (55.1MB/s), 52.5MiB/s-52.5MiB/s (55.1MB/s-55.1MB/s), io=315MiB (331MB), run=6005-6005msec
00:36:17.953 WRITE: bw=32.4MiB/s (34.0MB/s), 32.4MiB/s-32.4MiB/s (34.0MB/s-34.0MB/s), io=159MiB (166MB), run=4894-4894msec
00:36:17.953
00:36:17.953 Disk stats (read/write):
00:36:17.953 nvme0n1: ios=79844/39847, merge=0/0, ticks=479301/194305, in_queue=673606, util=98.65%
00:36:17.953 08:32:51 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:36:17.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:36:17.953
00:36:17.953 08:32:51 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:36:17.953 08:32:51 -- common/autotest_common.sh@1198 -- # local i=0
00:36:17.953 08:32:51 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME
00:36:17.953 08:32:51 
-- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:36:17.953 08:32:51 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:36:17.953 08:32:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:17.953 08:32:51 -- common/autotest_common.sh@1210 -- # return 0 00:36:17.953 08:32:51 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:18.211 08:32:51 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:36:18.211 08:32:51 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:36:18.211 08:32:51 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:36:18.211 08:32:51 -- target/multipath.sh@144 -- # nvmftestfini 00:36:18.211 08:32:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:36:18.211 08:32:51 -- nvmf/common.sh@116 -- # sync 00:36:18.211 08:32:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:36:18.211 08:32:51 -- nvmf/common.sh@119 -- # set +e 00:36:18.211 08:32:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:36:18.211 08:32:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:36:18.211 rmmod nvme_tcp 00:36:18.211 rmmod nvme_fabrics 00:36:18.211 rmmod nvme_keyring 00:36:18.211 08:32:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:36:18.211 08:32:51 -- nvmf/common.sh@123 -- # set -e 00:36:18.211 08:32:51 -- nvmf/common.sh@124 -- # return 0 00:36:18.211 08:32:51 -- nvmf/common.sh@477 -- # '[' -n 73275 ']' 00:36:18.211 08:32:51 -- nvmf/common.sh@478 -- # killprocess 73275 00:36:18.211 08:32:51 -- common/autotest_common.sh@926 -- # '[' -z 73275 ']' 00:36:18.211 08:32:51 -- common/autotest_common.sh@930 -- # kill -0 73275 00:36:18.211 08:32:51 -- common/autotest_common.sh@931 -- # uname 00:36:18.211 08:32:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:36:18.211 08:32:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73275 00:36:18.211 killing process with pid 73275 00:36:18.211 08:32:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:36:18.211 08:32:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:36:18.211 08:32:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73275' 00:36:18.211 08:32:51 -- common/autotest_common.sh@945 -- # kill 73275 00:36:18.211 08:32:51 -- common/autotest_common.sh@950 -- # wait 73275 00:36:18.469 08:32:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:36:18.469 08:32:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:36:18.469 08:32:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:36:18.469 08:32:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:18.469 08:32:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:36:18.469 08:32:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:18.469 08:32:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:18.469 08:32:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:18.469 08:32:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:36:18.469 00:36:18.469 real 0m20.167s 00:36:18.469 user 1m18.992s 00:36:18.469 sys 0m6.569s 00:36:18.469 08:32:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:18.469 08:32:51 -- common/autotest_common.sh@10 -- # set +x 00:36:18.469 ************************************ 00:36:18.469 END TEST nvmf_multipath 00:36:18.469 ************************************ 00:36:18.728 08:32:51 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:36:18.728 08:32:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:36:18.728 08:32:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:18.728 08:32:51 -- common/autotest_common.sh@10 -- # set +x 00:36:18.728 ************************************ 00:36:18.728 START TEST nvmf_zcopy 00:36:18.728 ************************************ 00:36:18.728 08:32:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:36:18.728 * Looking for test storage... 00:36:18.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:18.728 08:32:51 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:18.728 08:32:51 -- nvmf/common.sh@7 -- # uname -s 00:36:18.728 08:32:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:18.728 08:32:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:18.728 08:32:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:18.728 08:32:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:18.728 08:32:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:18.728 08:32:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:18.728 08:32:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:18.728 08:32:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:18.728 08:32:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:18.728 08:32:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:18.728 08:32:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:36:18.728 08:32:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:36:18.728 08:32:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:18.728 08:32:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:18.728 08:32:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:18.728 08:32:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:18.728 08:32:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:18.728 08:32:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:18.728 08:32:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:18.728 08:32:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.729 08:32:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.729 08:32:51 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.729 08:32:51 -- paths/export.sh@5 -- # export PATH 00:36:18.729 08:32:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.729 08:32:51 -- nvmf/common.sh@46 -- # : 0 00:36:18.729 08:32:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:36:18.729 08:32:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:36:18.729 08:32:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:36:18.729 08:32:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:18.729 08:32:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:18.729 08:32:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:36:18.729 08:32:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:36:18.729 08:32:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:36:18.729 08:32:51 -- target/zcopy.sh@12 -- # nvmftestinit 00:36:18.729 08:32:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:36:18.729 08:32:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:18.729 08:32:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:36:18.729 08:32:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:36:18.729 08:32:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:36:18.729 08:32:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:18.729 08:32:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:18.729 08:32:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:18.729 08:32:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:36:18.729 08:32:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:36:18.729 08:32:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:36:18.729 08:32:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:36:18.729 08:32:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:36:18.729 08:32:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:36:18.729 08:32:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:18.729 08:32:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:18.729 08:32:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:36:18.729 08:32:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:36:18.729 08:32:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:18.729 08:32:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:18.729 08:32:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:18.729 08:32:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:36:18.729 08:32:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:18.729 08:32:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:18.729 08:32:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:18.729 08:32:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:18.729 08:32:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:36:18.729 08:32:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:36:18.729 Cannot find device "nvmf_tgt_br" 00:36:18.729 08:32:52 -- nvmf/common.sh@154 -- # true 00:36:18.729 08:32:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:36:18.729 Cannot find device "nvmf_tgt_br2" 00:36:18.729 08:32:52 -- nvmf/common.sh@155 -- # true 00:36:18.729 08:32:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:36:18.729 08:32:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:36:18.988 Cannot find device "nvmf_tgt_br" 00:36:18.988 08:32:52 -- nvmf/common.sh@157 -- # true 00:36:18.988 08:32:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:36:18.988 Cannot find device "nvmf_tgt_br2" 00:36:18.988 08:32:52 -- nvmf/common.sh@158 -- # true 00:36:18.988 08:32:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:36:18.988 08:32:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:36:18.988 08:32:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:18.988 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:18.988 08:32:52 -- nvmf/common.sh@161 -- # true 00:36:18.988 08:32:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:18.988 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:18.988 08:32:52 -- nvmf/common.sh@162 -- # true 00:36:18.988 08:32:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:36:18.988 08:32:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:18.988 08:32:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:18.988 08:32:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:18.988 08:32:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:18.988 08:32:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:18.988 08:32:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:18.988 08:32:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:36:18.988 08:32:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:36:18.988 08:32:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:36:18.988 08:32:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:36:18.988 08:32:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:36:18.988 08:32:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:36:18.988 08:32:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:18.988 08:32:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:18.988 08:32:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:18.988 08:32:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:36:18.988 08:32:52 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:36:18.988 08:32:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:36:18.988 08:32:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:18.988 08:32:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:18.988 08:32:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:18.988 08:32:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:19.247 08:32:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:36:19.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:19.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:36:19.247 00:36:19.247 --- 10.0.0.2 ping statistics --- 00:36:19.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.247 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:36:19.247 08:32:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:36:19.248 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:19.248 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:36:19.248 00:36:19.248 --- 10.0.0.3 ping statistics --- 00:36:19.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.248 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:36:19.248 08:32:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:19.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:19.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:36:19.248 00:36:19.248 --- 10.0.0.1 ping statistics --- 00:36:19.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.248 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:36:19.248 08:32:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:19.248 08:32:52 -- nvmf/common.sh@421 -- # return 0 00:36:19.248 08:32:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:36:19.248 08:32:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:19.248 08:32:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:36:19.248 08:32:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:36:19.248 08:32:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:19.248 08:32:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:36:19.248 08:32:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:36:19.248 08:32:52 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:36:19.248 08:32:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:36:19.248 08:32:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:36:19.248 08:32:52 -- common/autotest_common.sh@10 -- # set +x 00:36:19.248 08:32:52 -- nvmf/common.sh@469 -- # nvmfpid=73858 00:36:19.248 08:32:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:36:19.248 08:32:52 -- nvmf/common.sh@470 -- # waitforlisten 73858 00:36:19.248 08:32:52 -- common/autotest_common.sh@819 -- # '[' -z 73858 ']' 00:36:19.248 08:32:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:19.248 08:32:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:36:19.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:19.248 08:32:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
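[Editor's note] The nvmf_veth_init block above builds the test network end to end: one veth pair per endpoint, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, and the bridge-side ends enslaved to nvmf_br. Condensed from the commands in the trace (the second target interface nvmf_tgt_if2 / 10.0.0.3 follows the same pattern and is omitted here), a standalone reconstruction looks roughly like:

# Rebuild the veth/bridge topology from the trace (run as root).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the two pairs together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                           # host -> target netns sanity check

The ping checks in the trace (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) verify this topology in both directions before any NVMe-oF traffic is attempted.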
00:36:19.248 08:32:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:36:19.248 08:32:52 -- common/autotest_common.sh@10 -- # set +x 00:36:19.248 [2024-04-17 08:32:52.420612] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:36:19.248 [2024-04-17 08:32:52.420714] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:19.248 [2024-04-17 08:32:52.564309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:19.506 [2024-04-17 08:32:52.665382] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:36:19.506 [2024-04-17 08:32:52.665552] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:19.506 [2024-04-17 08:32:52.665560] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:19.506 [2024-04-17 08:32:52.665578] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:19.506 [2024-04-17 08:32:52.665605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:20.075 08:32:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:36:20.075 08:32:53 -- common/autotest_common.sh@852 -- # return 0 00:36:20.075 08:32:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:36:20.075 08:32:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:36:20.075 08:32:53 -- common/autotest_common.sh@10 -- # set +x 00:36:20.075 08:32:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:20.075 08:32:53 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:36:20.075 08:32:53 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:36:20.075 08:32:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:20.075 08:32:53 -- common/autotest_common.sh@10 -- # set +x 00:36:20.075 [2024-04-17 08:32:53.357652] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:20.075 08:32:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:20.075 08:32:53 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:20.075 08:32:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:20.075 08:32:53 -- common/autotest_common.sh@10 -- # set +x 00:36:20.075 08:32:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:20.075 08:32:53 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:20.075 08:32:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:20.075 08:32:53 -- common/autotest_common.sh@10 -- # set +x 00:36:20.075 [2024-04-17 08:32:53.381679] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:20.075 08:32:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:20.075 08:32:53 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:20.075 08:32:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:20.075 08:32:53 -- common/autotest_common.sh@10 -- # set +x 00:36:20.075 08:32:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:20.075 08:32:53 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 
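[Editor's note] rpc_cmd in the trace is the harness wrapper that ultimately shells out to SPDK's scripts/rpc.py against /var/tmp/spdk.sock. The target provisioning logged above (TCP transport with zero-copy enabled, subsystem cnode1 capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and a 32 MiB / 4096-byte-block malloc bdev) reduces to the following calls, flags copied verbatim from the trace; the namespace attach that appears just below completes the setup:

scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy          # TCP transport, zero-copy on
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                                   # allow any host, max 10 namespaces
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                 # 32 MiB bdev, 4096-byte blocks
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The target process itself was started inside the nvmf_tgt_ns_spdk namespace (the ip netns exec prefix folded into NVMF_APP above), but the RPC endpoint is a Unix domain socket, so these calls run from the host unchanged.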
00:36:20.075 08:32:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:20.075 08:32:53 -- common/autotest_common.sh@10 -- # set +x 00:36:20.333 malloc0 00:36:20.333 08:32:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:20.333 08:32:53 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:36:20.333 08:32:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:20.333 08:32:53 -- common/autotest_common.sh@10 -- # set +x 00:36:20.333 08:32:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:20.333 08:32:53 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:36:20.333 08:32:53 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:36:20.333 08:32:53 -- nvmf/common.sh@520 -- # config=() 00:36:20.333 08:32:53 -- nvmf/common.sh@520 -- # local subsystem config 00:36:20.334 08:32:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:36:20.334 08:32:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:36:20.334 { 00:36:20.334 "params": { 00:36:20.334 "name": "Nvme$subsystem", 00:36:20.334 "trtype": "$TEST_TRANSPORT", 00:36:20.334 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:20.334 "adrfam": "ipv4", 00:36:20.334 "trsvcid": "$NVMF_PORT", 00:36:20.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:20.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:20.334 "hdgst": ${hdgst:-false}, 00:36:20.334 "ddgst": ${ddgst:-false} 00:36:20.334 }, 00:36:20.334 "method": "bdev_nvme_attach_controller" 00:36:20.334 } 00:36:20.334 EOF 00:36:20.334 )") 00:36:20.334 08:32:53 -- nvmf/common.sh@542 -- # cat 00:36:20.334 08:32:53 -- nvmf/common.sh@544 -- # jq . 00:36:20.334 08:32:53 -- nvmf/common.sh@545 -- # IFS=, 00:36:20.334 08:32:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:36:20.334 "params": { 00:36:20.334 "name": "Nvme1", 00:36:20.334 "trtype": "tcp", 00:36:20.334 "traddr": "10.0.0.2", 00:36:20.334 "adrfam": "ipv4", 00:36:20.334 "trsvcid": "4420", 00:36:20.334 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:20.334 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:20.334 "hdgst": false, 00:36:20.334 "ddgst": false 00:36:20.334 }, 00:36:20.334 "method": "bdev_nvme_attach_controller" 00:36:20.334 }' 00:36:20.334 [2024-04-17 08:32:53.484265] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:36:20.334 [2024-04-17 08:32:53.484354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73914 ] 00:36:20.334 [2024-04-17 08:32:53.623864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:20.592 [2024-04-17 08:32:53.724241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:20.592 Running I/O for 10 seconds... 
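[Editor's note] gen_nvmf_target_json, whose printf output appears above, hands bdevperf a bdev-layer JSON config through a process-substitution fd (/dev/fd/62). A standalone equivalent is sketched below; the params object is verbatim from the trace, while the outer "subsystems" wrapper is reconstructed from memory of nvmf/common.sh and should be treated as an assumption, as should the /tmp path:

# Write the config to a regular file instead of /dev/fd/62 (hypothetical path):
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# 10 s verify workload, queue depth 128, 8 KiB I/O, matching the run above:
./build/examples/bdevperf --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192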
00:36:30.565 00:36:30.565 Latency(us) 00:36:30.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.565 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:36:30.565 Verification LBA range: start 0x0 length 0x1000 00:36:30.565 Nvme1n1 : 10.01 10783.27 84.24 0.00 0.00 11841.03 1337.91 22894.67 00:36:30.565 =================================================================================================================== 00:36:30.565 Total : 10783.27 84.24 0.00 0.00 11841.03 1337.91 22894.67 00:36:30.824 08:33:04 -- target/zcopy.sh@39 -- # perfpid=74031 00:36:30.824 08:33:04 -- target/zcopy.sh@41 -- # xtrace_disable 00:36:30.824 08:33:04 -- common/autotest_common.sh@10 -- # set +x 00:36:30.824 08:33:04 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:36:30.824 08:33:04 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:36:30.824 08:33:04 -- nvmf/common.sh@520 -- # config=() 00:36:30.824 08:33:04 -- nvmf/common.sh@520 -- # local subsystem config 00:36:30.824 08:33:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:36:30.824 08:33:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:36:30.824 { 00:36:30.824 "params": { 00:36:30.824 "name": "Nvme$subsystem", 00:36:30.824 "trtype": "$TEST_TRANSPORT", 00:36:30.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:30.824 "adrfam": "ipv4", 00:36:30.824 "trsvcid": "$NVMF_PORT", 00:36:30.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:30.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:30.824 "hdgst": ${hdgst:-false}, 00:36:30.824 "ddgst": ${ddgst:-false} 00:36:30.824 }, 00:36:30.824 "method": "bdev_nvme_attach_controller" 00:36:30.824 } 00:36:30.824 EOF 00:36:30.824 )") 00:36:30.824 08:33:04 -- nvmf/common.sh@542 -- # cat 00:36:30.824 [2024-04-17 08:33:04.109265] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.824 [2024-04-17 08:33:04.109297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.824 08:33:04 -- nvmf/common.sh@544 -- # jq . 
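[Editor's note] Two quick consistency checks on the verify-run table above: the MiB/s column is just IOPS times the 8192-byte I/O size, and at queue depth 128 Little's law ties IOPS to the average latency; both agree with the reported 84.24 MiB/s and 11841.03 us:

echo '10783.27 * 8192 / 1048576' | bc -l   # = 84.24  (the MiB/s column)
echo '128 / 10783.27 * 1000000' | bc -l    # = 11870 us, vs 11841.03 us reported; ramp-up covers the gap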
00:36:30.824 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.824 08:33:04 -- nvmf/common.sh@545 -- # IFS=, 00:36:30.824 08:33:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:36:30.824 "params": { 00:36:30.824 "name": "Nvme1", 00:36:30.824 "trtype": "tcp", 00:36:30.824 "traddr": "10.0.0.2", 00:36:30.824 "adrfam": "ipv4", 00:36:30.824 "trsvcid": "4420", 00:36:30.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:30.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:30.824 "hdgst": false, 00:36:30.824 "ddgst": false 00:36:30.824 }, 00:36:30.824 "method": "bdev_nvme_attach_controller" 00:36:30.824 }' 00:36:30.824 [2024-04-17 08:33:04.121223] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.824 [2024-04-17 08:33:04.121242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.824 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.824 [2024-04-17 08:33:04.133212] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.824 [2024-04-17 08:33:04.133229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.824 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:30.824 [2024-04-17 08:33:04.145172] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.824 [2024-04-17 08:33:04.145188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.824 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.086 [2024-04-17 08:33:04.155714] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:36:31.086 [2024-04-17 08:33:04.155765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74031 ] 00:36:31.086 [2024-04-17 08:33:04.161154] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.086 [2024-04-17 08:33:04.161174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.087 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.087 [2024-04-17 08:33:04.173131] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.087 [2024-04-17 08:33:04.173149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.087 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.087 [2024-04-17 08:33:04.185121] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.087 [2024-04-17 08:33:04.185138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.087 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.087 [2024-04-17 08:33:04.197102] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.087 [2024-04-17 08:33:04.197117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.087 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.087 [2024-04-17 08:33:04.209070] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.087 [2024-04-17 08:33:04.209086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.087 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.087 [2024-04-17 08:33:04.221050] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.087 [2024-04-17 08:33:04.221065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.087 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.087 [2024-04-17 08:33:04.233032] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.087 [2024-04-17 08:33:04.233047] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.087 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.087 [2024-04-17 08:33:04.245012] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.087 [2024-04-17 08:33:04.245027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.087 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.087 [2024-04-17 08:33:04.257006] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.087 [2024-04-17 08:33:04.257022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.087 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.087 [2024-04-17 08:33:04.268974] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.087 [2024-04-17 08:33:04.268989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.087 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.087 [2024-04-17 08:33:04.280973] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.087 [2024-04-17 08:33:04.280990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.087 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.087 [2024-04-17 08:33:04.292933] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.087 [2024-04-17 08:33:04.292949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.087 [2024-04-17 08:33:04.293343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:31.087 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.087 [2024-04-17 08:33:04.304924] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.088 [2024-04-17 08:33:04.304950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.088 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.088 [2024-04-17 08:33:04.316901] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.088 [2024-04-17 08:33:04.316917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.088 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.088 [2024-04-17 08:33:04.328886] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.088 [2024-04-17 08:33:04.328905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.088 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.088 [2024-04-17 08:33:04.340873] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.088 [2024-04-17 08:33:04.340893] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.088 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.088 [2024-04-17 08:33:04.352847] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.088 [2024-04-17 08:33:04.352866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.088 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.088 [2024-04-17 08:33:04.364824] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.088 [2024-04-17 08:33:04.364842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.088 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.088 [2024-04-17 08:33:04.376804] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.088 [2024-04-17 08:33:04.376822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.088 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.088 [2024-04-17 08:33:04.388779] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.088 [2024-04-17 08:33:04.388794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.088 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.088 [2024-04-17 
08:33:04.398216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:31.088 [2024-04-17 08:33:04.400759] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.088 [2024-04-17 08:33:04.400775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.088 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.088 [2024-04-17 08:33:04.412744] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.088 [2024-04-17 08:33:04.412763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.376 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.376 [2024-04-17 08:33:04.424730] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.376 [2024-04-17 08:33:04.424750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.376 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.376 [2024-04-17 08:33:04.436708] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.376 [2024-04-17 08:33:04.436726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.376 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.376 [2024-04-17 08:33:04.448689] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.376 [2024-04-17 08:33:04.448707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.377 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.377 [2024-04-17 08:33:04.460683] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.377 [2024-04-17 08:33:04.460705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.377 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.377 [2024-04-17 08:33:04.472652] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.377 [2024-04-17 08:33:04.472669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.377 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.377 [2024-04-17 08:33:04.484650] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.377 [2024-04-17 08:33:04.484674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.377 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.377 [2024-04-17 08:33:04.492636] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.377 [2024-04-17 08:33:04.492667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.377 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.377 [2024-04-17 08:33:04.504622] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.377 [2024-04-17 08:33:04.504645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.377 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.377 [2024-04-17 08:33:04.516625] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.377 [2024-04-17 08:33:04.516650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.377 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.377 [2024-04-17 08:33:04.528608] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.377 [2024-04-17 08:33:04.528632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.377 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.377 [2024-04-17 08:33:04.540570] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.377 [2024-04-17 08:33:04.540598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.377 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.377 [2024-04-17 08:33:04.552563] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:31.377 [2024-04-17 08:33:04.552595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:31.377 Running I/O for 5 seconds... 
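[Editor's note] From here to the end of the section the trace is one pattern repeated: while the 5-second randrw bdevperf run is in flight, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which is already attached. Each call pauses the subsystem (hence nvmf_rpc_ns_paused in the trace), is rejected with JSON-RPC error -32602, and I/O resumes. An illustrative reconstruction of that loop, not the literal zcopy.sh source, using the bdevperf pid captured above as perfpid=74031:

# Hammer the duplicate-NSID error path while I/O runs; every call must fail.
while kill -0 "$perfpid" 2>/dev/null; do
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
        && { echo 'BUG: duplicate NSID accepted'; exit 1; }
done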
00:36:31.377 2024/04/17 08:33:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:31.377 [... the same three-line sequence (subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace / 2024/04/17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, err: Code=-32602 Msg=Invalid parameters) repeats verbatim for each nvmf_subsystem_add_ns attempt timestamped 08:33:04.569198 through 08:33:05.288211 ...] 00:36:32.159 [2024-04-17 
08:33:05.297945] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.159 [2024-04-17 08:33:05.297980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.159 2024/04/17 08:33:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:32.159 [2024-04-17 08:33:05.307247] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.159 [2024-04-17 08:33:05.307327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.159 2024/04/17 08:33:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:32.159 [2024-04-17 08:33:05.317146] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.159 [2024-04-17 08:33:05.317234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.159 2024/04/17 08:33:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:32.159 [2024-04-17 08:33:05.326702] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.159 [2024-04-17 08:33:05.326791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.159 2024/04/17 08:33:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:32.159 [2024-04-17 08:33:05.334012] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.159 [2024-04-17 08:33:05.334090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.159 2024/04/17 08:33:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:32.159 [2024-04-17 08:33:05.345262] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.159 [2024-04-17 08:33:05.345347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.159 2024/04/17 08:33:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:32.159 [2024-04-17 08:33:05.355124] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:32.159 [2024-04-17 08:33:05.355213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:32.159 2024/04/17 08:33:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
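The entries above (and below) are a single failure mode repeated in a tight loop: each iteration re-issues nvmf_subsystem_add_ns for NSID 1 on nqn.2016-06.io.spdk:cnode1, and because namespace 1 is already attached to the subsystem, the target rejects every retry with JSON-RPC Code=-32602 (Invalid parameters). A minimal sketch of calls that would reproduce this with SPDK's scripts/rpc.py; the malloc bdev sizing below is an assumption, while the bdev name, NQN, NSID, and error code are taken from the log:

  # create a backing bdev once (64 MiB, 512 B blocks -- sizes assumed)
  scripts/rpc.py bdev_malloc_create -b malloc0 64 512
  # first attach of NSID 1 succeeds
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
  # repeating the attach fails: "Requested NSID 1 already in use" -> Code=-32602
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0

On the wire, each iteration corresponds to the request printed in the params map above:

  {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns", "params": {"nqn": "nqn.2016-06.io.spdk:cnode1", "namespace": {"bdev_name": "malloc0", "nsid": 1}}}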
00:36:32.159 [2024-04-17 08:33:05.364520] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:32.160 [2024-04-17 08:33:05.364600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:32.160 2024/04/17 08:33:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:36:32.680 [2024-04-17 08:33:05.766226] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:32.680 [2024-04-17 08:33:05.766324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:32.680 2024/04/17 08:33:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:36:33.201 [2024-04-17 08:33:06.279372] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:33.201 [2024-04-17 08:33:06.279476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:33.201 2024/04/17 08:33:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:36:33.721 [2024-04-17 08:33:06.798645] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:33.721 [2024-04-17 08:33:06.798689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:33.721 2024/04/17 08:33:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:36:33.982 [2024-04-17 08:33:07.052915] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:33.982 [2024-04-17 08:33:07.052959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:33.982 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:36:33.982 [2024-04-17 08:33:07.086633] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1
already in use 00:36:33.982 [2024-04-17 08:33:07.086676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.982 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.982 [2024-04-17 08:33:07.103258] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.982 [2024-04-17 08:33:07.103300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.982 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.982 [2024-04-17 08:33:07.120192] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.982 [2024-04-17 08:33:07.120237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.982 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.982 [2024-04-17 08:33:07.136593] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.982 [2024-04-17 08:33:07.136638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.982 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.982 [2024-04-17 08:33:07.152829] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.982 [2024-04-17 08:33:07.152873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.982 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.982 [2024-04-17 08:33:07.165255] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.982 [2024-04-17 08:33:07.165299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.982 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.982 [2024-04-17 08:33:07.177022] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.982 [2024-04-17 08:33:07.177061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.982 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.982 [2024-04-17 08:33:07.192725] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:36:33.982 [2024-04-17 08:33:07.192779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.982 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.982 [2024-04-17 08:33:07.209533] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.982 [2024-04-17 08:33:07.209583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.982 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.982 [2024-04-17 08:33:07.225866] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.982 [2024-04-17 08:33:07.225911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.982 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.982 [2024-04-17 08:33:07.242037] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.982 [2024-04-17 08:33:07.242077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.982 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.982 [2024-04-17 08:33:07.258389] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.982 [2024-04-17 08:33:07.258440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.982 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.982 [2024-04-17 08:33:07.270514] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.982 [2024-04-17 08:33:07.270550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.982 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.982 [2024-04-17 08:33:07.285792] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.982 [2024-04-17 08:33:07.285829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.982 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:33.982 [2024-04-17 08:33:07.302195] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:33.982 [2024-04-17 08:33:07.302243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:33.982 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.243 [2024-04-17 08:33:07.318924] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.243 [2024-04-17 08:33:07.318964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.243 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.243 [2024-04-17 08:33:07.335078] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.243 [2024-04-17 08:33:07.335133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.243 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.243 [2024-04-17 08:33:07.350883] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.243 [2024-04-17 08:33:07.350927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.243 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.243 [2024-04-17 08:33:07.365581] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.243 [2024-04-17 08:33:07.365627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.243 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.243 [2024-04-17 08:33:07.377413] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.243 [2024-04-17 08:33:07.377553] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.243 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.243 [2024-04-17 08:33:07.393482] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.243 [2024-04-17 08:33:07.393595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.243 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.243 [2024-04-17 
08:33:07.409518] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.243 [2024-04-17 08:33:07.409555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.243 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.243 [2024-04-17 08:33:07.421715] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.243 [2024-04-17 08:33:07.421752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.243 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.244 [2024-04-17 08:33:07.438210] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.244 [2024-04-17 08:33:07.438303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.244 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.244 [2024-04-17 08:33:07.454405] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.244 [2024-04-17 08:33:07.454454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.244 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.244 [2024-04-17 08:33:07.466514] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.244 [2024-04-17 08:33:07.466549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.244 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.244 [2024-04-17 08:33:07.482167] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.244 [2024-04-17 08:33:07.482262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.244 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.244 [2024-04-17 08:33:07.498424] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.244 [2024-04-17 08:33:07.498461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.244 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
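The entries above all exercise the same negative path: nvmf_subsystem_add_ns is re-issued for NSID 1, which nqn.2016-06.io.spdk:cnode1 already exposes, so the target rejects every attempt with JSON-RPC error -32602, and the Go client then logs the failed call with its map[...] rendering of the params. A minimal sketch of one such exchange, reconstructed from the method, params, and Code/Msg fields logged here; the rpc.py invocation and the literal JSON framing (including the id value) are assumptions, not taken from this log:

  # Assumes a running SPDK NVMe-oF target where bdev "malloc0" is already
  # attached to nqn.2016-06.io.spdk:cnode1 as NSID 1, so re-adding it must fail.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # Request carried over the RPC socket (hypothetical id):
  #   {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
  #    "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
  #               "namespace": {"bdev_name": "malloc0", "nsid": 1}}}
  # Expected response, matching the Code=-32602 Msg=Invalid parameters pair above:
  #   {"jsonrpc": "2.0", "id": 1,
  #    "error": {"code": -32602, "message": "Invalid parameters"}}
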
00:36:34.244 [2024-04-17 08:33:07.515134] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.244 [2024-04-17 08:33:07.515175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.244 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.244 [2024-04-17 08:33:07.531796] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.244 [2024-04-17 08:33:07.531892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.244 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.244 [2024-04-17 08:33:07.548787] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.244 [2024-04-17 08:33:07.548827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.244 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.244 [2024-04-17 08:33:07.565822] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.244 [2024-04-17 08:33:07.565864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.244 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.504 [2024-04-17 08:33:07.582253] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.504 [2024-04-17 08:33:07.582347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.504 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.504 [2024-04-17 08:33:07.598338] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.504 [2024-04-17 08:33:07.598377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.504 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.504 [2024-04-17 08:33:07.609100] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.504 [2024-04-17 08:33:07.609135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.504 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:36:34.504 [2024-04-17 08:33:07.624181] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.504 [2024-04-17 08:33:07.624277] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.504 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.504 [2024-04-17 08:33:07.639718] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.504 [2024-04-17 08:33:07.639759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.504 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.504 [2024-04-17 08:33:07.654594] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.504 [2024-04-17 08:33:07.654642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.504 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.504 [2024-04-17 08:33:07.669753] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.504 [2024-04-17 08:33:07.669841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.504 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.504 [2024-04-17 08:33:07.681176] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.504 [2024-04-17 08:33:07.681210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.504 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.504 [2024-04-17 08:33:07.696464] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.504 [2024-04-17 08:33:07.696498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.504 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.504 [2024-04-17 08:33:07.707646] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.504 [2024-04-17 08:33:07.707716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.504 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:36:34.504 [2024-04-17 08:33:07.723288] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.504 [2024-04-17 08:33:07.723324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.504 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.504 [2024-04-17 08:33:07.739495] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.504 [2024-04-17 08:33:07.739533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.504 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.504 [2024-04-17 08:33:07.755577] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.504 [2024-04-17 08:33:07.755661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.505 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.505 [2024-04-17 08:33:07.769965] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.505 [2024-04-17 08:33:07.770001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.505 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.505 [2024-04-17 08:33:07.785216] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.505 [2024-04-17 08:33:07.785251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.505 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.505 [2024-04-17 08:33:07.796644] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.505 [2024-04-17 08:33:07.796727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.505 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.505 [2024-04-17 08:33:07.812523] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.505 [2024-04-17 08:33:07.812563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.505 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.505 [2024-04-17 08:33:07.828756] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.505 [2024-04-17 08:33:07.828804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.505 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.765 [2024-04-17 08:33:07.840542] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.765 [2024-04-17 08:33:07.840623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.765 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.765 [2024-04-17 08:33:07.856470] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.765 [2024-04-17 08:33:07.856505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.765 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.765 [2024-04-17 08:33:07.881462] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.765 [2024-04-17 08:33:07.881561] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.765 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.765 [2024-04-17 08:33:07.892030] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.765 [2024-04-17 08:33:07.892112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.765 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.765 [2024-04-17 08:33:07.908413] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.765 [2024-04-17 08:33:07.908451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.765 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.765 [2024-04-17 08:33:07.924182] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.765 [2024-04-17 08:33:07.924260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.765 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.765 [2024-04-17 08:33:07.935964] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.765 [2024-04-17 08:33:07.936002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.765 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.765 [2024-04-17 08:33:07.951196] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.765 [2024-04-17 08:33:07.951231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.765 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.765 [2024-04-17 08:33:07.967865] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.765 [2024-04-17 08:33:07.967901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.765 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.765 [2024-04-17 08:33:07.980453] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.765 [2024-04-17 08:33:07.980492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.765 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.765 [2024-04-17 08:33:07.992513] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.765 [2024-04-17 08:33:07.992548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.766 2024/04/17 08:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.766 [2024-04-17 08:33:08.008244] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.766 [2024-04-17 08:33:08.008280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.766 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.766 [2024-04-17 08:33:08.025228] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.766 [2024-04-17 08:33:08.025263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.766 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.766 [2024-04-17 08:33:08.041191] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.766 [2024-04-17 08:33:08.041224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.766 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.766 [2024-04-17 08:33:08.056612] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.766 [2024-04-17 08:33:08.056643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.766 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.766 [2024-04-17 08:33:08.072917] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.766 [2024-04-17 08:33:08.072954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.766 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:34.766 [2024-04-17 08:33:08.088540] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:34.766 [2024-04-17 08:33:08.088574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.766 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.026 [2024-04-17 08:33:08.102344] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.026 [2024-04-17 08:33:08.102379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.026 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.027 [2024-04-17 08:33:08.118245] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.027 [2024-04-17 08:33:08.118283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.027 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.027 [2024-04-17 08:33:08.133822] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.027 [2024-04-17 08:33:08.133860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.027 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.027 [2024-04-17 08:33:08.145597] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.027 [2024-04-17 08:33:08.145635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.027 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.027 [2024-04-17 08:33:08.161853] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.027 [2024-04-17 08:33:08.161891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.027 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.027 [2024-04-17 08:33:08.178000] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.027 [2024-04-17 08:33:08.178037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.027 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.027 [2024-04-17 08:33:08.194272] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.027 [2024-04-17 08:33:08.194317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.027 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.027 [2024-04-17 08:33:08.205469] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.027 [2024-04-17 08:33:08.205500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.027 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.027 [2024-04-17 08:33:08.220219] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.027 [2024-04-17 08:33:08.220251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.027 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.027 [2024-04-17 08:33:08.231726] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.027 [2024-04-17 08:33:08.231754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.027 2024/04/17 08:33:08 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.027 [2024-04-17 08:33:08.247217] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.027 [2024-04-17 08:33:08.247251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.027 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.027 [2024-04-17 08:33:08.262723] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.027 [2024-04-17 08:33:08.262756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.027 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.027 [2024-04-17 08:33:08.277362] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.027 [2024-04-17 08:33:08.277406] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.027 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.027 [2024-04-17 08:33:08.289303] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.027 [2024-04-17 08:33:08.289337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.027 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.027 [2024-04-17 08:33:08.305510] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.027 [2024-04-17 08:33:08.305543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.027 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.027 [2024-04-17 08:33:08.320860] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.027 [2024-04-17 08:33:08.320895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.027 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.027 [2024-04-17 08:33:08.335653] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.027 [2024-04-17 08:33:08.335687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.027 2024/04/17 08:33:08 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.027 [2024-04-17 08:33:08.346332] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.027 [2024-04-17 08:33:08.346365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.027 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.288 [2024-04-17 08:33:08.362137] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.288 [2024-04-17 08:33:08.362170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.288 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.288 [2024-04-17 08:33:08.377247] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.288 [2024-04-17 08:33:08.377278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.288 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.288 [2024-04-17 08:33:08.392754] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.288 [2024-04-17 08:33:08.392789] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.288 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.288 [2024-04-17 08:33:08.408885] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.288 [2024-04-17 08:33:08.408919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.288 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.288 [2024-04-17 08:33:08.420085] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.288 [2024-04-17 08:33:08.420118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.288 2024/04/17 08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:35.288 [2024-04-17 08:33:08.435543] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:35.288 [2024-04-17 08:33:08.435576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:35.288 2024/04/17 
08:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:36:35.288 [2024-04-17 08:33:08.450799] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:35.288 [2024-04-17 08:33:08.450845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:35.288 (this same three-line failure repeats continuously, timestamps aside, from 08:33:08.450 through 08:33:09.545 while the subsystem remains paused)
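The Code=-32602 responses above are the negative path the zcopy test is exercising: nvmf_subsystem_add_ns is requested over and over for an NSID that is already allocated. As a rough illustration only, the same failure can be provoked with SPDK's scripts/rpc.py along these lines (the bdev size, serial number and target setup here are assumptions for the sketch, not values from this run):

    # Hypothetical sketch; assumes a running nvmf_tgt reachable via the default rpc.py socket.
    ./scripts/rpc.py bdev_malloc_create -b malloc0 64 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add succeeds; NSID 1 is now taken
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # second add fails: Code=-32602, "Requested NSID 1 already in use"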
00:36:36.330 Latency(us)
00:36:36.330 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:36.330 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:36:36.330 Nvme1n1 : 5.01 14539.24 113.59 0.00 0.00 8795.16 3648.84 21177.57
00:36:36.330 ===================================================================================================================
00:36:36.330 Total : 14539.24 113.59 0.00 0.00 8795.16 3648.84 21177.57
00:36:36.330 (the identical paused-subsystem add_ns failures keep appearing alongside this output, the last at 2024-04-17 08:33:09.777, until the test proceeds to teardown)
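Immediately below, the test reaps a background job (pid 74031, already gone), swaps the plain malloc bdev for a delay bdev, and points the abort example at it. Stripped of the xtrace noise, the sequence reduces to roughly the following (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py; the parameters are copied from the trace, the rest is a sketch):

    # Sketch of the teardown-and-retest step traced below; assumes the same running target.
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # Wrap malloc0 in a delay bdev; the four values are avg/p99 read and write latencies in microseconds.
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # Issue I/O and aborts against the now-slow namespace: 5 s, queue depth 64, 50/50 randrw.
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'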
00:36:36.589 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (74031) - No such process 00:36:36.589 08:33:09 -- target/zcopy.sh@49 -- # wait 74031 00:36:36.589 08:33:09 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:36.589 08:33:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:36.589 08:33:09 -- common/autotest_common.sh@10 -- # set +x 00:36:36.589 08:33:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:36.589 08:33:09 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:36.589 08:33:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:36.589 08:33:09 -- common/autotest_common.sh@10 -- # set +x 00:36:36.589 delay0 00:36:36.589 08:33:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:36.589 08:33:09 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:36:36.589 08:33:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:36.589 08:33:09 -- common/autotest_common.sh@10 -- # set +x 00:36:36.589 08:33:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:36.589 08:33:09 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:36:36.847 [2024-04-17 08:33:09.989941] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:43.434 Initializing NVMe Controllers 00:36:43.435 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:43.435 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:43.435 Initialization complete. Launching workers.
00:36:43.435 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 68 00:36:43.435 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 355, failed to submit 33 00:36:43.435 success 176, unsuccess 179, failed 0 00:36:43.435 08:33:16 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:36:43.435 08:33:16 -- target/zcopy.sh@60 -- # nvmftestfini 00:36:43.435 08:33:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:36:43.435 08:33:16 -- nvmf/common.sh@116 -- # sync 00:36:43.435 08:33:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:36:43.435 08:33:16 -- nvmf/common.sh@119 -- # set +e 00:36:43.435 08:33:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:36:43.435 08:33:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:36:43.435 rmmod nvme_tcp 00:36:43.435 rmmod nvme_fabrics 00:36:43.435 rmmod nvme_keyring 00:36:43.435 08:33:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:36:43.435 08:33:16 -- nvmf/common.sh@123 -- # set -e 00:36:43.435 08:33:16 -- nvmf/common.sh@124 -- # return 0 00:36:43.435 08:33:16 -- nvmf/common.sh@477 -- # '[' -n 73858 ']' 00:36:43.435 08:33:16 -- nvmf/common.sh@478 -- # killprocess 73858 00:36:43.435 08:33:16 -- common/autotest_common.sh@926 -- # '[' -z 73858 ']' 00:36:43.435 08:33:16 -- common/autotest_common.sh@930 -- # kill -0 73858 00:36:43.435 08:33:16 -- common/autotest_common.sh@931 -- # uname 00:36:43.435 08:33:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:36:43.435 08:33:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73858 00:36:43.435 08:33:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:36:43.435 08:33:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:36:43.435 08:33:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73858' 00:36:43.435 killing process with pid 73858 00:36:43.435 08:33:16 -- common/autotest_common.sh@945 -- # kill 73858 00:36:43.435 08:33:16 -- common/autotest_common.sh@950 -- # wait 73858 00:36:43.435 08:33:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:36:43.435 08:33:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:36:43.435 08:33:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:36:43.435 08:33:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:43.435 08:33:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:36:43.435 08:33:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:43.435 08:33:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:43.435 08:33:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:43.435 08:33:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:36:43.692 00:36:43.692 real 0m24.914s 00:36:43.692 user 0m41.347s 00:36:43.692 sys 0m5.422s 00:36:43.692 08:33:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:43.692 08:33:16 -- common/autotest_common.sh@10 -- # set +x 00:36:43.692 ************************************ 00:36:43.692 END TEST nvmf_zcopy 00:36:43.692 ************************************ 00:36:43.692 08:33:16 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:36:43.692 08:33:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:36:43.692 08:33:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:43.692 08:33:16 -- common/autotest_common.sh@10 -- # set +x 00:36:43.692 ************************************ 00:36:43.692 START TEST nvmf_nmic 
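The killprocess step in the zcopy teardown above (kill -0 73858, ps --no-headers -o comm=, then kill and wait) is a guard-then-kill pattern: probe whether the pid is alive without signalling it, confirm the process identity, then terminate and reap. A simplified stand-alone sketch, not the verbatim autotest helper (which additionally special-cases sudo-owned processes and non-Linux ps):

    # Simplified sketch of the guard-then-kill pattern used by the harness.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0       # pid already gone: nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_1 for an SPDK reactor
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null || true              # reap if it was our child; ignore otherwise
    }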
00:36:43.692 ************************************ 00:36:43.692 08:33:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:36:43.692 * Looking for test storage... 00:36:43.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:43.692 08:33:16 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:43.692 08:33:16 -- nvmf/common.sh@7 -- # uname -s 00:36:43.692 08:33:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:43.692 08:33:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:43.692 08:33:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:43.692 08:33:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:43.692 08:33:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:43.692 08:33:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:43.692 08:33:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:43.692 08:33:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:43.693 08:33:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:43.693 08:33:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:43.693 08:33:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:36:43.693 08:33:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:36:43.693 08:33:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:43.693 08:33:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:43.693 08:33:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:43.693 08:33:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:43.693 08:33:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:43.693 08:33:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:43.693 08:33:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:43.693 08:33:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.693 08:33:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.693 08:33:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.693 08:33:16 -- paths/export.sh@5 -- # export PATH 00:36:43.693 08:33:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.693 08:33:16 -- nvmf/common.sh@46 -- # : 0 00:36:43.693 08:33:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:36:43.693 08:33:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:36:43.693 08:33:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:36:43.693 08:33:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:43.693 08:33:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:43.693 08:33:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:36:43.693 08:33:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:36:43.693 08:33:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:36:43.693 08:33:16 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:43.693 08:33:16 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:43.693 08:33:16 -- target/nmic.sh@14 -- # nvmftestinit 00:36:43.693 08:33:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:36:43.693 08:33:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:43.693 08:33:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:36:43.693 08:33:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:36:43.693 08:33:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:36:43.693 08:33:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:43.693 08:33:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:43.693 08:33:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:43.693 08:33:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:36:43.693 08:33:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:36:43.693 08:33:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:36:43.693 08:33:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:36:43.693 08:33:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:36:43.693 08:33:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:36:43.693 08:33:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:43.693 08:33:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:43.693 08:33:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:36:43.693 08:33:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:36:43.693 08:33:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:43.693 08:33:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:43.693 08:33:16 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:43.693 08:33:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:43.693 08:33:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:43.693 08:33:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:43.693 08:33:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:43.693 08:33:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:43.693 08:33:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:36:43.693 08:33:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:36:43.693 Cannot find device "nvmf_tgt_br" 00:36:43.693 08:33:16 -- nvmf/common.sh@154 -- # true 00:36:43.693 08:33:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:36:43.693 Cannot find device "nvmf_tgt_br2" 00:36:43.693 08:33:17 -- nvmf/common.sh@155 -- # true 00:36:43.693 08:33:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:36:43.693 08:33:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:36:43.952 Cannot find device "nvmf_tgt_br" 00:36:43.952 08:33:17 -- nvmf/common.sh@157 -- # true 00:36:43.952 08:33:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:36:43.952 Cannot find device "nvmf_tgt_br2" 00:36:43.952 08:33:17 -- nvmf/common.sh@158 -- # true 00:36:43.952 08:33:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:36:43.952 08:33:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:36:43.952 08:33:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:43.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:43.952 08:33:17 -- nvmf/common.sh@161 -- # true 00:36:43.952 08:33:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:43.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:43.952 08:33:17 -- nvmf/common.sh@162 -- # true 00:36:43.952 08:33:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:36:43.952 08:33:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:43.952 08:33:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:43.952 08:33:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:43.952 08:33:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:43.952 08:33:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:43.952 08:33:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:43.952 08:33:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:36:43.952 08:33:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:36:43.952 08:33:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:36:43.952 08:33:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:36:43.952 08:33:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:36:43.952 08:33:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:36:43.952 08:33:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:43.952 08:33:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:43.952 08:33:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:36:43.952 08:33:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:36:43.952 08:33:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:36:43.952 08:33:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:36:43.952 08:33:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:43.952 08:33:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:44.210 08:33:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:44.210 08:33:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:44.210 08:33:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:36:44.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:44.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:36:44.210 00:36:44.210 --- 10.0.0.2 ping statistics --- 00:36:44.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:44.210 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:36:44.210 08:33:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:36:44.210 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:44.210 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:36:44.210 00:36:44.210 --- 10.0.0.3 ping statistics --- 00:36:44.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:44.210 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:36:44.210 08:33:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:44.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:44.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:36:44.210 00:36:44.210 --- 10.0.0.1 ping statistics --- 00:36:44.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:44.210 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:36:44.210 08:33:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:44.210 08:33:17 -- nvmf/common.sh@421 -- # return 0 00:36:44.210 08:33:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:36:44.210 08:33:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:44.210 08:33:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:36:44.210 08:33:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:36:44.210 08:33:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:44.210 08:33:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:36:44.210 08:33:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:36:44.210 08:33:17 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:36:44.211 08:33:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:36:44.211 08:33:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:36:44.211 08:33:17 -- common/autotest_common.sh@10 -- # set +x 00:36:44.211 08:33:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:36:44.211 08:33:17 -- nvmf/common.sh@469 -- # nvmfpid=74353 00:36:44.211 08:33:17 -- nvmf/common.sh@470 -- # waitforlisten 74353 00:36:44.211 08:33:17 -- common/autotest_common.sh@819 -- # '[' -z 74353 ']' 00:36:44.211 08:33:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:44.211 08:33:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:36:44.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
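The nvmf_veth_init trace above builds the entire virtual test network before nvmf_tgt starts. Condensed into a standalone sketch, with device names and addresses taken verbatim from the log (common.sh interleaves these commands with cleanup of leftovers from earlier runs, omitted here):

    # Target lives in its own network namespace; three veth pairs connect it
    # to the host: one initiator-side, two target-side (for the 10.0.0.2 and
    # 10.0.0.3 listeners).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Initiator is 10.0.0.1 on the host; the target answers on 10.0.0.2 and
    # 10.0.0.3 inside the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring all links up, then bridge the host-side peers so initiator and
    # target share one L2 segment.
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Open the NVMe/TCP port, allow bridged traffic, and verify reachability
    # in both directions before starting the target.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The sub-millisecond RTTs in the ping output are the expected signature of veth-over-bridge: every hop stays inside the host kernel, so packet loss here would point at setup, not the network.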
00:36:44.211 08:33:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:44.211 08:33:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:36:44.211 08:33:17 -- common/autotest_common.sh@10 -- # set +x 00:36:44.211 [2024-04-17 08:33:17.405792] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:36:44.211 [2024-04-17 08:33:17.405891] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:44.469 [2024-04-17 08:33:17.552981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:44.469 [2024-04-17 08:33:17.659725] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:36:44.469 [2024-04-17 08:33:17.659869] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:44.469 [2024-04-17 08:33:17.659877] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:44.469 [2024-04-17 08:33:17.659884] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:44.470 [2024-04-17 08:33:17.659960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:44.470 [2024-04-17 08:33:17.660210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:44.470 [2024-04-17 08:33:17.660242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:44.470 [2024-04-17 08:33:17.660250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:45.035 08:33:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:36:45.035 08:33:18 -- common/autotest_common.sh@852 -- # return 0 00:36:45.035 08:33:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:36:45.035 08:33:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:36:45.035 08:33:18 -- common/autotest_common.sh@10 -- # set +x 00:36:45.035 08:33:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:45.035 08:33:18 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:45.035 08:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:45.035 08:33:18 -- common/autotest_common.sh@10 -- # set +x 00:36:45.035 [2024-04-17 08:33:18.333629] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:45.035 08:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:45.035 08:33:18 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:45.035 08:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:45.035 08:33:18 -- common/autotest_common.sh@10 -- # set +x 00:36:45.293 Malloc0 00:36:45.293 08:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:45.293 08:33:18 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:45.293 08:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:45.293 08:33:18 -- common/autotest_common.sh@10 -- # set +x 00:36:45.293 08:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:45.293 08:33:18 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:45.293 08:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:45.293 08:33:18 
-- common/autotest_common.sh@10 -- # set +x 00:36:45.293 08:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:45.293 08:33:18 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:45.293 08:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:45.293 08:33:18 -- common/autotest_common.sh@10 -- # set +x 00:36:45.293 [2024-04-17 08:33:18.416575] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:45.293 08:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:45.293 08:33:18 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:36:45.293 test case1: single bdev can't be used in multiple subsystems 00:36:45.293 08:33:18 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:36:45.293 08:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:45.293 08:33:18 -- common/autotest_common.sh@10 -- # set +x 00:36:45.293 08:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:45.293 08:33:18 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:45.293 08:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:45.293 08:33:18 -- common/autotest_common.sh@10 -- # set +x 00:36:45.293 08:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:45.293 08:33:18 -- target/nmic.sh@28 -- # nmic_status=0 00:36:45.293 08:33:18 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:36:45.293 08:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:45.293 08:33:18 -- common/autotest_common.sh@10 -- # set +x 00:36:45.293 [2024-04-17 08:33:18.448403] bdev.c:7935:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:36:45.293 [2024-04-17 08:33:18.448447] subsystem.c:1779:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:36:45.293 [2024-04-17 08:33:18.448455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:45.293 2024/04/17 08:33:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:36:45.293 request: 00:36:45.293 { 00:36:45.293 "method": "nvmf_subsystem_add_ns", 00:36:45.293 "params": { 00:36:45.294 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:36:45.294 "namespace": { 00:36:45.294 "bdev_name": "Malloc0" 00:36:45.294 } 00:36:45.294 } 00:36:45.294 } 00:36:45.294 Got JSON-RPC error response 00:36:45.294 GoRPCClient: error on JSON-RPC call 00:36:45.294 08:33:18 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:36:45.294 08:33:18 -- target/nmic.sh@29 -- # nmic_status=1 00:36:45.294 08:33:18 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:36:45.294 08:33:18 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:36:45.294 Adding namespace failed - expected result. 
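rpc_cmd in this trace is a thin wrapper over scripts/rpc.py plus a captured exit status, so test case1 reduces to roughly the following expected-failure sketch (commands copied from the trace; the wrapper and xtrace plumbing are omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420

    # cnode1 already holds an exclusive_write claim on Malloc0, so adding the
    # same bdev to cnode2 must fail (the trace shows JSON-RPC error -32602).
    if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo 'Adding namespace unexpectedly succeeded' >&2
        exit 1
    fi
    echo ' Adding namespace failed - expected result.'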
00:36:45.294 08:33:18 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:36:45.294 test case2: host connect to nvmf target in multiple paths 00:36:45.294 08:33:18 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:45.294 08:33:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:45.294 08:33:18 -- common/autotest_common.sh@10 -- # set +x 00:36:45.294 [2024-04-17 08:33:18.460516] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:45.294 08:33:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:45.294 08:33:18 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:45.551 08:33:18 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:36:45.551 08:33:18 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:36:45.551 08:33:18 -- common/autotest_common.sh@1177 -- # local i=0 00:36:45.551 08:33:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:36:45.551 08:33:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:36:45.551 08:33:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:36:48.077 08:33:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:36:48.078 08:33:20 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:36:48.078 08:33:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:36:48.078 08:33:20 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:36:48.078 08:33:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:36:48.078 08:33:20 -- common/autotest_common.sh@1187 -- # return 0 00:36:48.078 08:33:20 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:48.078 [global] 00:36:48.078 thread=1 00:36:48.078 invalidate=1 00:36:48.078 rw=write 00:36:48.078 time_based=1 00:36:48.078 runtime=1 00:36:48.078 ioengine=libaio 00:36:48.078 direct=1 00:36:48.078 bs=4096 00:36:48.078 iodepth=1 00:36:48.078 norandommap=0 00:36:48.078 numjobs=1 00:36:48.078 00:36:48.078 verify_dump=1 00:36:48.078 verify_backlog=512 00:36:48.078 verify_state_save=0 00:36:48.078 do_verify=1 00:36:48.078 verify=crc32c-intel 00:36:48.078 [job0] 00:36:48.078 filename=/dev/nvme0n1 00:36:48.078 Could not set queue depth (nvme0n1) 00:36:48.078 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:48.078 fio-3.35 00:36:48.078 Starting 1 thread 00:36:49.010 00:36:49.010 job0: (groupid=0, jobs=1): err= 0: pid=74458: Wed Apr 17 08:33:22 2024 00:36:49.010 read: IOPS=4197, BW=16.4MiB/s (17.2MB/s)(16.4MiB/1000msec) 00:36:49.010 slat (nsec): min=8352, max=43604, avg=10266.32, stdev=2080.64 00:36:49.010 clat (usec): min=96, max=183, avg=113.46, stdev= 7.52 00:36:49.010 lat (usec): min=106, max=203, avg=123.72, stdev= 8.07 00:36:49.010 clat percentiles (usec): 00:36:49.010 | 1.00th=[ 101], 5.00th=[ 103], 10.00th=[ 105], 20.00th=[ 108], 00:36:49.010 | 30.00th=[ 110], 40.00th=[ 112], 50.00th=[ 113], 60.00th=[ 115], 00:36:49.010 | 70.00th=[ 117], 80.00th=[ 120], 90.00th=[ 124], 
95.00th=[ 128], 00:36:49.010 | 99.00th=[ 139], 99.50th=[ 141], 99.90th=[ 149], 99.95th=[ 153], 00:36:49.010 | 99.99th=[ 184] 00:36:49.010 write: IOPS=4608, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1000msec); 0 zone resets 00:36:49.010 slat (usec): min=12, max=130, avg=16.68, stdev= 7.50 00:36:49.010 clat (usec): min=72, max=565, avg=85.33, stdev=10.84 00:36:49.010 lat (usec): min=86, max=584, avg=102.01, stdev=14.97 00:36:49.010 clat percentiles (usec): 00:36:49.010 | 1.00th=[ 75], 5.00th=[ 77], 10.00th=[ 78], 20.00th=[ 80], 00:36:49.010 | 30.00th=[ 81], 40.00th=[ 83], 50.00th=[ 84], 60.00th=[ 86], 00:36:49.010 | 70.00th=[ 87], 80.00th=[ 90], 90.00th=[ 95], 95.00th=[ 100], 00:36:49.010 | 99.00th=[ 115], 99.50th=[ 121], 99.90th=[ 143], 99.95th=[ 188], 00:36:49.010 | 99.99th=[ 570] 00:36:49.010 bw ( KiB/s): min=18160, max=18160, per=98.52%, avg=18160.00, stdev= 0.00, samples=1 00:36:49.010 iops : min= 4540, max= 4540, avg=4540.00, stdev= 0.00, samples=1 00:36:49.010 lat (usec) : 100=50.05%, 250=49.94%, 750=0.01% 00:36:49.010 cpu : usr=1.30%, sys=9.30%, ctx=8806, majf=0, minf=2 00:36:49.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:49.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.010 issued rwts: total=4197,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:49.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:49.010 00:36:49.010 Run status group 0 (all jobs): 00:36:49.010 READ: bw=16.4MiB/s (17.2MB/s), 16.4MiB/s-16.4MiB/s (17.2MB/s-17.2MB/s), io=16.4MiB (17.2MB), run=1000-1000msec 00:36:49.010 WRITE: bw=18.0MiB/s (18.9MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=18.0MiB (18.9MB), run=1000-1000msec 00:36:49.010 00:36:49.010 Disk stats (read/write): 00:36:49.010 nvme0n1: ios=3837/4096, merge=0/0, ticks=468/380, in_queue=848, util=91.57% 00:36:49.010 08:33:22 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:49.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:36:49.010 08:33:22 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:49.010 08:33:22 -- common/autotest_common.sh@1198 -- # local i=0 00:36:49.010 08:33:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:49.010 08:33:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:36:49.010 08:33:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:36:49.010 08:33:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:49.010 08:33:22 -- common/autotest_common.sh@1210 -- # return 0 00:36:49.010 08:33:22 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:49.010 08:33:22 -- target/nmic.sh@53 -- # nvmftestfini 00:36:49.010 08:33:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:36:49.010 08:33:22 -- nvmf/common.sh@116 -- # sync 00:36:49.010 08:33:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:36:49.010 08:33:22 -- nvmf/common.sh@119 -- # set +e 00:36:49.010 08:33:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:36:49.010 08:33:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:36:49.010 rmmod nvme_tcp 00:36:49.010 rmmod nvme_fabrics 00:36:49.010 rmmod nvme_keyring 00:36:49.010 08:33:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:36:49.010 08:33:22 -- nvmf/common.sh@123 -- # set -e 00:36:49.010 08:33:22 -- nvmf/common.sh@124 -- # return 0 00:36:49.010 08:33:22 -- nvmf/common.sh@477 -- # '[' -n 74353 ']' 
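The teardown here mirrors the setup in reverse; stripped of its helpers, the sequence in the trace is approximately the sketch below, with killprocess following it in the log. waitforserial_disconnect's retry loop and remove_spdk_ns's body are not shown in the log, so the poll and the netns delete are assumptions:

    # Drop both paths (ports 4420 and 4421) and wait until no block device
    # with the test serial remains visible.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 0.1   # assumed poll; the real helper counts retries
    done

    # Unload initiator kernel modules so the next test starts clean.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the target (killprocess in the trace does kill + wait on the pid),
    # then remove the namespace and flush the initiator address.
    kill "$nvmfpid" && wait "$nvmfpid"
    ip netns delete nvmf_tgt_ns_spdk    # assumed body of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if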
00:36:49.010 08:33:22 -- nvmf/common.sh@478 -- # killprocess 74353 00:36:49.010 08:33:22 -- common/autotest_common.sh@926 -- # '[' -z 74353 ']' 00:36:49.010 08:33:22 -- common/autotest_common.sh@930 -- # kill -0 74353 00:36:49.010 08:33:22 -- common/autotest_common.sh@931 -- # uname 00:36:49.010 08:33:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:36:49.010 08:33:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74353 00:36:49.010 killing process with pid 74353 00:36:49.010 08:33:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:36:49.010 08:33:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:36:49.010 08:33:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74353' 00:36:49.010 08:33:22 -- common/autotest_common.sh@945 -- # kill 74353 00:36:49.010 08:33:22 -- common/autotest_common.sh@950 -- # wait 74353 00:36:49.268 08:33:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:36:49.268 08:33:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:36:49.268 08:33:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:36:49.268 08:33:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:49.268 08:33:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:36:49.268 08:33:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:49.268 08:33:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:49.268 08:33:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:49.268 08:33:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:36:49.268 00:36:49.268 real 0m5.759s 00:36:49.268 user 0m19.240s 00:36:49.268 sys 0m1.128s 00:36:49.268 08:33:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:49.268 ************************************ 00:36:49.268 END TEST nvmf_nmic 00:36:49.268 ************************************ 00:36:49.268 08:33:22 -- common/autotest_common.sh@10 -- # set +x 00:36:49.525 08:33:22 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:36:49.525 08:33:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:36:49.525 08:33:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:49.525 08:33:22 -- common/autotest_common.sh@10 -- # set +x 00:36:49.525 ************************************ 00:36:49.525 START TEST nvmf_fio_target 00:36:49.525 ************************************ 00:36:49.525 08:33:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:36:49.525 * Looking for test storage... 
00:36:49.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:49.525 08:33:22 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:49.525 08:33:22 -- nvmf/common.sh@7 -- # uname -s 00:36:49.525 08:33:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:49.525 08:33:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:49.525 08:33:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:49.525 08:33:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:49.526 08:33:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:49.526 08:33:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:49.526 08:33:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:49.526 08:33:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:49.526 08:33:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:49.526 08:33:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:49.526 08:33:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:36:49.526 08:33:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:36:49.526 08:33:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:49.526 08:33:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:49.526 08:33:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:49.526 08:33:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:49.526 08:33:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:49.526 08:33:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:49.526 08:33:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:49.526 08:33:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.526 08:33:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.526 08:33:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.526 08:33:22 -- paths/export.sh@5 
-- # export PATH 00:36:49.526 08:33:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.526 08:33:22 -- nvmf/common.sh@46 -- # : 0 00:36:49.526 08:33:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:36:49.526 08:33:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:36:49.526 08:33:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:36:49.526 08:33:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:49.526 08:33:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:49.526 08:33:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:36:49.526 08:33:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:36:49.526 08:33:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:36:49.526 08:33:22 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:49.526 08:33:22 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:49.526 08:33:22 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:49.526 08:33:22 -- target/fio.sh@16 -- # nvmftestinit 00:36:49.526 08:33:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:36:49.526 08:33:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:49.526 08:33:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:36:49.526 08:33:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:36:49.526 08:33:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:36:49.526 08:33:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:49.526 08:33:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:49.526 08:33:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:49.526 08:33:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:36:49.526 08:33:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:36:49.526 08:33:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:36:49.526 08:33:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:36:49.526 08:33:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:36:49.526 08:33:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:36:49.526 08:33:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:49.526 08:33:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:49.526 08:33:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:36:49.526 08:33:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:36:49.526 08:33:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:49.526 08:33:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:49.526 08:33:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:49.526 08:33:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:49.526 08:33:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:49.526 08:33:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:49.526 08:33:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:49.526 08:33:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:49.526 08:33:22 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:36:49.526 08:33:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:36:49.526 Cannot find device "nvmf_tgt_br" 00:36:49.526 08:33:22 -- nvmf/common.sh@154 -- # true 00:36:49.526 08:33:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:36:49.783 Cannot find device "nvmf_tgt_br2" 00:36:49.783 08:33:22 -- nvmf/common.sh@155 -- # true 00:36:49.783 08:33:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:36:49.783 08:33:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:36:49.783 Cannot find device "nvmf_tgt_br" 00:36:49.783 08:33:22 -- nvmf/common.sh@157 -- # true 00:36:49.783 08:33:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:36:49.783 Cannot find device "nvmf_tgt_br2" 00:36:49.783 08:33:22 -- nvmf/common.sh@158 -- # true 00:36:49.783 08:33:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:36:49.783 08:33:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:36:49.783 08:33:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:49.783 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:49.783 08:33:22 -- nvmf/common.sh@161 -- # true 00:36:49.783 08:33:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:49.783 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:49.783 08:33:22 -- nvmf/common.sh@162 -- # true 00:36:49.783 08:33:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:36:49.783 08:33:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:49.783 08:33:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:49.783 08:33:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:49.783 08:33:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:49.783 08:33:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:49.783 08:33:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:49.783 08:33:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:36:49.783 08:33:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:36:49.783 08:33:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:36:49.783 08:33:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:36:49.783 08:33:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:36:49.783 08:33:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:36:49.783 08:33:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:49.783 08:33:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:49.783 08:33:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:49.783 08:33:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:36:49.783 08:33:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:36:49.783 08:33:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:36:50.041 08:33:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:50.041 08:33:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:50.041 08:33:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:50.041 08:33:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:50.041 08:33:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:36:50.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:50.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:36:50.041 00:36:50.041 --- 10.0.0.2 ping statistics --- 00:36:50.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:50.041 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:36:50.041 08:33:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:36:50.041 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:50.041 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 00:36:50.041 00:36:50.041 --- 10.0.0.3 ping statistics --- 00:36:50.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:50.041 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:36:50.041 08:33:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:50.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:50.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:36:50.041 00:36:50.041 --- 10.0.0.1 ping statistics --- 00:36:50.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:50.041 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:36:50.041 08:33:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:50.041 08:33:23 -- nvmf/common.sh@421 -- # return 0 00:36:50.041 08:33:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:36:50.041 08:33:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:50.041 08:33:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:36:50.041 08:33:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:36:50.041 08:33:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:50.041 08:33:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:36:50.041 08:33:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:36:50.041 08:33:23 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:36:50.041 08:33:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:36:50.041 08:33:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:36:50.041 08:33:23 -- common/autotest_common.sh@10 -- # set +x 00:36:50.041 08:33:23 -- nvmf/common.sh@469 -- # nvmfpid=74643 00:36:50.041 08:33:23 -- nvmf/common.sh@470 -- # waitforlisten 74643 00:36:50.041 08:33:23 -- common/autotest_common.sh@819 -- # '[' -z 74643 ']' 00:36:50.041 08:33:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:50.041 08:33:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:36:50.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:50.041 08:33:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:50.041 08:33:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:36:50.041 08:33:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:36:50.041 08:33:23 -- common/autotest_common.sh@10 -- # set +x 00:36:50.041 [2024-04-17 08:33:23.282236] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
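waitforlisten, entered above, gates every later rpc.py call on the target actually owning /var/tmp/spdk.sock. A minimal reimplementation of the pattern (pid and socket path from the log; probing readiness via the rpc_get_methods RPC is an assumption, and the helper in autotest_common.sh carries more error handling):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            # Give up immediately if nvmf_tgt already exited.
            kill -0 "$pid" 2>/dev/null || return 1
            # The target only answers RPCs once it has bound the socket.
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }

    waitforlisten 74643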
00:36:50.041 [2024-04-17 08:33:23.282327] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:50.298 [2024-04-17 08:33:23.427450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:50.298 [2024-04-17 08:33:23.543672] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:36:50.298 [2024-04-17 08:33:23.543915] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:50.298 [2024-04-17 08:33:23.543933] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:50.299 [2024-04-17 08:33:23.543942] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:50.299 [2024-04-17 08:33:23.544057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:50.299 [2024-04-17 08:33:23.544255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:50.299 [2024-04-17 08:33:23.544188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:50.299 [2024-04-17 08:33:23.544258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:51.231 08:33:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:36:51.231 08:33:24 -- common/autotest_common.sh@852 -- # return 0 00:36:51.231 08:33:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:36:51.231 08:33:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:36:51.231 08:33:24 -- common/autotest_common.sh@10 -- # set +x 00:36:51.231 08:33:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:51.231 08:33:24 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:51.231 [2024-04-17 08:33:24.483581] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:51.231 08:33:24 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:51.490 08:33:24 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:36:51.490 08:33:24 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:51.748 08:33:25 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:36:51.748 08:33:25 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:52.008 08:33:25 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:36:52.008 08:33:25 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:52.267 08:33:25 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:36:52.267 08:33:25 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:36:52.526 08:33:25 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:52.785 08:33:25 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:36:52.785 08:33:25 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:53.044 08:33:26 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:36:53.044 08:33:26 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:53.302 08:33:26 -- target/fio.sh@31 -- # 
concat_malloc_bdevs+=Malloc6 00:36:53.302 08:33:26 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:36:53.576 08:33:26 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:53.853 08:33:26 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:53.853 08:33:26 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:53.853 08:33:27 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:53.853 08:33:27 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:54.111 08:33:27 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:54.369 [2024-04-17 08:33:27.585429] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:54.369 08:33:27 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:36:54.627 08:33:27 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:36:54.885 08:33:28 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:54.885 08:33:28 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:36:54.885 08:33:28 -- common/autotest_common.sh@1177 -- # local i=0 00:36:54.885 08:33:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:36:54.885 08:33:28 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:36:54.885 08:33:28 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:36:54.885 08:33:28 -- common/autotest_common.sh@1184 -- # sleep 2 00:36:57.418 08:33:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:36:57.418 08:33:30 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:36:57.418 08:33:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:36:57.418 08:33:30 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:36:57.418 08:33:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:36:57.418 08:33:30 -- common/autotest_common.sh@1187 -- # return 0 00:36:57.418 08:33:30 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:57.418 [global] 00:36:57.418 thread=1 00:36:57.418 invalidate=1 00:36:57.418 rw=write 00:36:57.418 time_based=1 00:36:57.418 runtime=1 00:36:57.418 ioengine=libaio 00:36:57.418 direct=1 00:36:57.418 bs=4096 00:36:57.418 iodepth=1 00:36:57.418 norandommap=0 00:36:57.418 numjobs=1 00:36:57.418 00:36:57.418 verify_dump=1 00:36:57.418 verify_backlog=512 00:36:57.418 verify_state_save=0 00:36:57.418 do_verify=1 00:36:57.418 verify=crc32c-intel 00:36:57.418 [job0] 00:36:57.418 filename=/dev/nvme0n1 00:36:57.418 [job1] 00:36:57.418 filename=/dev/nvme0n2 00:36:57.418 [job2] 00:36:57.418 filename=/dev/nvme0n3 00:36:57.418 [job3] 00:36:57.418 filename=/dev/nvme0n4 00:36:57.418 Could not set queue depth (nvme0n1) 00:36:57.418 Could not set queue depth (nvme0n2) 
00:36:57.418 Could not set queue depth (nvme0n3) 00:36:57.418 Could not set queue depth (nvme0n4) 00:36:57.418 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:57.418 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:57.418 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:57.418 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:57.418 fio-3.35 00:36:57.418 Starting 4 threads 00:36:58.365 00:36:58.365 job0: (groupid=0, jobs=1): err= 0: pid=74930: Wed Apr 17 08:33:31 2024 00:36:58.365 read: IOPS=1790, BW=7161KiB/s (7333kB/s)(7168KiB/1001msec) 00:36:58.365 slat (nsec): min=6373, max=46687, avg=9987.76, stdev=2668.00 00:36:58.365 clat (usec): min=130, max=712, avg=301.27, stdev=80.27 00:36:58.365 lat (usec): min=140, max=719, avg=311.26, stdev=81.28 00:36:58.365 clat percentiles (usec): 00:36:58.365 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 196], 20.00th=[ 235], 00:36:58.365 | 30.00th=[ 243], 40.00th=[ 253], 50.00th=[ 297], 60.00th=[ 310], 00:36:58.365 | 70.00th=[ 330], 80.00th=[ 404], 90.00th=[ 416], 95.00th=[ 424], 00:36:58.365 | 99.00th=[ 441], 99.50th=[ 506], 99.90th=[ 619], 99.95th=[ 709], 00:36:58.365 | 99.99th=[ 709] 00:36:58.365 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:36:58.365 slat (usec): min=9, max=148, avg=15.75, stdev= 6.94 00:36:58.365 clat (usec): min=107, max=292, avg=197.65, stdev=27.84 00:36:58.365 lat (usec): min=129, max=389, avg=213.40, stdev=27.48 00:36:58.365 clat percentiles (usec): 00:36:58.365 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 167], 00:36:58.365 | 30.00th=[ 184], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 206], 00:36:58.365 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 233], 95.00th=[ 241], 00:36:58.365 | 99.00th=[ 258], 99.50th=[ 262], 99.90th=[ 273], 99.95th=[ 285], 00:36:58.365 | 99.99th=[ 293] 00:36:58.365 bw ( KiB/s): min= 8814, max= 8814, per=26.93%, avg=8814.00, stdev= 0.00, samples=1 00:36:58.365 iops : min= 2203, max= 2203, avg=2203.00, stdev= 0.00, samples=1 00:36:58.365 lat (usec) : 250=69.90%, 500=29.87%, 750=0.23% 00:36:58.365 cpu : usr=1.10%, sys=3.80%, ctx=3841, majf=0, minf=7 00:36:58.365 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:58.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.365 issued rwts: total=1792,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:58.365 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:58.365 job1: (groupid=0, jobs=1): err= 0: pid=74931: Wed Apr 17 08:33:31 2024 00:36:58.365 read: IOPS=1558, BW=6234KiB/s (6383kB/s)(6240KiB/1001msec) 00:36:58.365 slat (nsec): min=5692, max=46517, avg=10315.60, stdev=4804.20 00:36:58.365 clat (usec): min=145, max=40389, avg=344.09, stdev=1023.40 00:36:58.365 lat (usec): min=154, max=40399, avg=354.41, stdev=1023.37 00:36:58.365 clat percentiles (usec): 00:36:58.365 | 1.00th=[ 210], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 239], 00:36:58.365 | 30.00th=[ 249], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 318], 00:36:58.365 | 70.00th=[ 379], 80.00th=[ 400], 90.00th=[ 416], 95.00th=[ 424], 00:36:58.365 | 99.00th=[ 445], 99.50th=[ 519], 99.90th=[ 3458], 99.95th=[40633], 00:36:58.365 | 99.99th=[40633] 00:36:58.365 write: IOPS=2045, 
BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:36:58.365 slat (usec): min=7, max=297, avg=16.01, stdev=11.78 00:36:58.365 clat (nsec): min=1165, max=402966, avg=200020.23, stdev=26964.01 00:36:58.365 lat (usec): min=103, max=546, avg=216.03, stdev=26.31 00:36:58.365 clat percentiles (usec): 00:36:58.365 | 1.00th=[ 133], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 182], 00:36:58.365 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:36:58.365 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 241], 00:36:58.365 | 99.00th=[ 273], 99.50th=[ 302], 99.90th=[ 343], 99.95th=[ 351], 00:36:58.365 | 99.99th=[ 404] 00:36:58.365 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:36:58.365 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:36:58.365 lat (usec) : 2=0.03%, 50=0.03%, 100=0.28%, 250=68.51%, 500=30.93% 00:36:58.365 lat (usec) : 750=0.08%, 1000=0.03% 00:36:58.365 lat (msec) : 2=0.03%, 4=0.06%, 50=0.03% 00:36:58.365 cpu : usr=0.90%, sys=3.90%, ctx=3627, majf=0, minf=5 00:36:58.365 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:58.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.365 issued rwts: total=1560,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:58.365 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:58.365 job2: (groupid=0, jobs=1): err= 0: pid=74932: Wed Apr 17 08:33:31 2024 00:36:58.365 read: IOPS=1791, BW=7165KiB/s (7337kB/s)(7172KiB/1001msec) 00:36:58.365 slat (nsec): min=6615, max=30120, avg=10266.77, stdev=2715.49 00:36:58.365 clat (usec): min=171, max=1000, avg=300.82, stdev=81.89 00:36:58.365 lat (usec): min=179, max=1019, avg=311.09, stdev=82.54 00:36:58.365 clat percentiles (usec): 00:36:58.365 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 194], 20.00th=[ 235], 00:36:58.365 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 297], 60.00th=[ 310], 00:36:58.365 | 70.00th=[ 330], 80.00th=[ 404], 90.00th=[ 416], 95.00th=[ 424], 00:36:58.365 | 99.00th=[ 445], 99.50th=[ 529], 99.90th=[ 619], 99.95th=[ 1004], 00:36:58.365 | 99.99th=[ 1004] 00:36:58.365 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:36:58.365 slat (usec): min=9, max=135, avg=15.00, stdev= 8.29 00:36:58.365 clat (usec): min=126, max=357, avg=198.36, stdev=21.24 00:36:58.365 lat (usec): min=139, max=492, avg=213.36, stdev=21.39 00:36:58.365 clat percentiles (usec): 00:36:58.365 | 1.00th=[ 145], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 182], 00:36:58.365 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 204], 00:36:58.365 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 235], 00:36:58.365 | 99.00th=[ 255], 99.50th=[ 260], 99.90th=[ 281], 99.95th=[ 322], 00:36:58.365 | 99.99th=[ 359] 00:36:58.365 bw ( KiB/s): min= 8856, max= 8856, per=27.05%, avg=8856.00, stdev= 0.00, samples=1 00:36:58.365 iops : min= 2214, max= 2214, avg=2214.00, stdev= 0.00, samples=1 00:36:58.365 lat (usec) : 250=70.48%, 500=29.29%, 750=0.21% 00:36:58.365 lat (msec) : 2=0.03% 00:36:58.365 cpu : usr=1.20%, sys=3.60%, ctx=3841, majf=0, minf=9 00:36:58.365 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:58.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.365 issued rwts: total=1793,2048,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:36:58.365 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:58.365 job3: (groupid=0, jobs=1): err= 0: pid=74933: Wed Apr 17 08:33:31 2024 00:36:58.365 read: IOPS=1553, BW=6214KiB/s (6363kB/s)(6220KiB/1001msec) 00:36:58.365 slat (nsec): min=5649, max=40627, avg=9787.95, stdev=3973.54 00:36:58.365 clat (usec): min=130, max=40462, avg=345.78, stdev=1026.61 00:36:58.365 lat (usec): min=145, max=40469, avg=355.57, stdev=1026.59 00:36:58.365 clat percentiles (usec): 00:36:58.365 | 1.00th=[ 212], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 241], 00:36:58.365 | 30.00th=[ 251], 40.00th=[ 293], 50.00th=[ 310], 60.00th=[ 318], 00:36:58.365 | 70.00th=[ 379], 80.00th=[ 400], 90.00th=[ 416], 95.00th=[ 424], 00:36:58.365 | 99.00th=[ 461], 99.50th=[ 486], 99.90th=[ 3392], 99.95th=[40633], 00:36:58.365 | 99.99th=[40633] 00:36:58.365 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:36:58.365 slat (usec): min=8, max=232, avg=16.43, stdev=10.62 00:36:58.365 clat (usec): min=41, max=381, avg=199.42, stdev=27.27 00:36:58.365 lat (usec): min=113, max=394, avg=215.85, stdev=26.30 00:36:58.365 clat percentiles (usec): 00:36:58.365 | 1.00th=[ 130], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 180], 00:36:58.365 | 30.00th=[ 186], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 206], 00:36:58.365 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 237], 00:36:58.365 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 334], 99.95th=[ 363], 00:36:58.365 | 99.99th=[ 383] 00:36:58.365 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:36:58.365 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:36:58.365 lat (usec) : 50=0.03%, 100=0.06%, 250=68.50%, 500=31.22%, 750=0.08% 00:36:58.365 lat (msec) : 2=0.03%, 4=0.06%, 50=0.03% 00:36:58.365 cpu : usr=1.20%, sys=3.70%, ctx=3619, majf=0, minf=14 00:36:58.365 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:58.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.365 issued rwts: total=1555,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:58.365 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:58.365 00:36:58.365 Run status group 0 (all jobs): 00:36:58.365 READ: bw=26.1MiB/s (27.4MB/s), 6214KiB/s-7165KiB/s (6363kB/s-7337kB/s), io=26.2MiB (27.4MB), run=1001-1001msec 00:36:58.365 WRITE: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:36:58.365 00:36:58.366 Disk stats (read/write): 00:36:58.366 nvme0n1: ios=1586/1988, merge=0/0, ticks=449/391, in_queue=840, util=89.28% 00:36:58.366 nvme0n2: ios=1579/1536, merge=0/0, ticks=532/302, in_queue=834, util=89.21% 00:36:58.366 nvme0n3: ios=1577/1991, merge=0/0, ticks=453/398, in_queue=851, util=90.62% 00:36:58.366 nvme0n4: ios=1554/1536, merge=0/0, ticks=561/311, in_queue=872, util=90.69% 00:36:58.366 08:33:31 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:36:58.366 [global] 00:36:58.366 thread=1 00:36:58.366 invalidate=1 00:36:58.366 rw=randwrite 00:36:58.366 time_based=1 00:36:58.366 runtime=1 00:36:58.366 ioengine=libaio 00:36:58.366 direct=1 00:36:58.366 bs=4096 00:36:58.366 iodepth=1 00:36:58.366 norandommap=0 00:36:58.366 numjobs=1 00:36:58.366 00:36:58.366 verify_dump=1 00:36:58.366 verify_backlog=512 00:36:58.366 verify_state_save=0 00:36:58.366 
do_verify=1 00:36:58.366 verify=crc32c-intel 00:36:58.366 [job0] 00:36:58.366 filename=/dev/nvme0n1 00:36:58.366 [job1] 00:36:58.366 filename=/dev/nvme0n2 00:36:58.366 [job2] 00:36:58.366 filename=/dev/nvme0n3 00:36:58.366 [job3] 00:36:58.366 filename=/dev/nvme0n4 00:36:58.644 Could not set queue depth (nvme0n1) 00:36:58.644 Could not set queue depth (nvme0n2) 00:36:58.644 Could not set queue depth (nvme0n3) 00:36:58.644 Could not set queue depth (nvme0n4) 00:36:58.644 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:58.644 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:58.644 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:58.644 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:58.644 fio-3.35 00:36:58.644 Starting 4 threads 00:37:00.019 00:37:00.019 job0: (groupid=0, jobs=1): err= 0: pid=74986: Wed Apr 17 08:33:33 2024 00:37:00.019 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:37:00.019 slat (nsec): min=5214, max=29652, avg=7986.81, stdev=2258.30 00:37:00.019 clat (usec): min=116, max=1400, avg=233.44, stdev=50.17 00:37:00.019 lat (usec): min=124, max=1408, avg=241.42, stdev=49.88 00:37:00.019 clat percentiles (usec): 00:37:00.019 | 1.00th=[ 129], 5.00th=[ 141], 10.00th=[ 153], 20.00th=[ 219], 00:37:00.019 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 245], 00:37:00.019 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 281], 00:37:00.019 | 99.00th=[ 343], 99.50th=[ 351], 99.90th=[ 498], 99.95th=[ 545], 00:37:00.019 | 99.99th=[ 1401] 00:37:00.019 write: IOPS=2395, BW=9582KiB/s (9812kB/s)(9592KiB/1001msec); 0 zone resets 00:37:00.019 slat (usec): min=7, max=134, avg=15.25, stdev= 7.60 00:37:00.019 clat (usec): min=83, max=7914, avg=193.47, stdev=166.03 00:37:00.019 lat (usec): min=111, max=7927, avg=208.72, stdev=165.53 00:37:00.019 clat percentiles (usec): 00:37:00.019 | 1.00th=[ 106], 5.00th=[ 116], 10.00th=[ 123], 20.00th=[ 141], 00:37:00.019 | 30.00th=[ 186], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:37:00.019 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 237], 00:37:00.019 | 99.00th=[ 255], 99.50th=[ 269], 99.90th=[ 441], 99.95th=[ 1844], 00:37:00.019 | 99.99th=[ 7898] 00:37:00.019 bw ( KiB/s): min=10552, max=10552, per=22.15%, avg=10552.00, stdev= 0.00, samples=1 00:37:00.019 iops : min= 2638, max= 2638, avg=2638.00, stdev= 0.00, samples=1 00:37:00.019 lat (usec) : 100=0.11%, 250=84.75%, 500=15.05%, 750=0.02% 00:37:00.019 lat (msec) : 2=0.04%, 10=0.02% 00:37:00.019 cpu : usr=0.80%, sys=4.20%, ctx=4448, majf=0, minf=8 00:37:00.019 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:00.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.019 issued rwts: total=2048,2398,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.019 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:00.019 job1: (groupid=0, jobs=1): err= 0: pid=74987: Wed Apr 17 08:33:33 2024 00:37:00.019 read: IOPS=3461, BW=13.5MiB/s (14.2MB/s)(13.5MiB/1001msec) 00:37:00.019 slat (nsec): min=7716, max=88308, avg=9900.44, stdev=4253.08 00:37:00.020 clat (usec): min=111, max=426, avg=139.05, stdev=12.49 00:37:00.020 lat (usec): min=120, max=435, avg=148.95, 
stdev=14.09 00:37:00.020 clat percentiles (usec): 00:37:00.020 | 1.00th=[ 118], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 130], 00:37:00.020 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:37:00.020 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 153], 95.00th=[ 159], 00:37:00.020 | 99.00th=[ 172], 99.50th=[ 180], 99.90th=[ 200], 99.95th=[ 260], 00:37:00.020 | 99.99th=[ 429] 00:37:00.020 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:37:00.020 slat (usec): min=11, max=128, avg=15.63, stdev= 8.05 00:37:00.020 clat (usec): min=67, max=1976, avg=116.91, stdev=44.69 00:37:00.020 lat (usec): min=96, max=2005, avg=132.53, stdev=46.27 00:37:00.020 clat percentiles (usec): 00:37:00.020 | 1.00th=[ 95], 5.00th=[ 99], 10.00th=[ 102], 20.00th=[ 106], 00:37:00.020 | 30.00th=[ 110], 40.00th=[ 112], 50.00th=[ 115], 60.00th=[ 117], 00:37:00.020 | 70.00th=[ 121], 80.00th=[ 125], 90.00th=[ 133], 95.00th=[ 139], 00:37:00.020 | 99.00th=[ 155], 99.50th=[ 165], 99.90th=[ 281], 99.95th=[ 1860], 00:37:00.020 | 99.99th=[ 1975] 00:37:00.020 bw ( KiB/s): min=15728, max=15728, per=33.01%, avg=15728.00, stdev= 0.00, samples=1 00:37:00.020 iops : min= 3932, max= 3932, avg=3932.00, stdev= 0.00, samples=1 00:37:00.020 lat (usec) : 100=3.02%, 250=96.89%, 500=0.06% 00:37:00.020 lat (msec) : 2=0.03% 00:37:00.020 cpu : usr=1.60%, sys=6.60%, ctx=7064, majf=0, minf=5 00:37:00.020 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:00.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.020 issued rwts: total=3465,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.020 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:00.020 job2: (groupid=0, jobs=1): err= 0: pid=74988: Wed Apr 17 08:33:33 2024 00:37:00.020 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:37:00.020 slat (nsec): min=7748, max=30033, avg=9166.70, stdev=1507.13 00:37:00.020 clat (usec): min=120, max=226, avg=150.00, stdev=11.94 00:37:00.020 lat (usec): min=128, max=235, avg=159.17, stdev=12.28 00:37:00.020 clat percentiles (usec): 00:37:00.020 | 1.00th=[ 128], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:37:00.020 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 151], 00:37:00.020 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 172], 00:37:00.020 | 99.00th=[ 188], 99.50th=[ 198], 99.90th=[ 215], 99.95th=[ 219], 00:37:00.020 | 99.99th=[ 227] 00:37:00.020 write: IOPS=3485, BW=13.6MiB/s (14.3MB/s)(13.6MiB/1001msec); 0 zone resets 00:37:00.020 slat (usec): min=11, max=139, avg=16.12, stdev= 9.53 00:37:00.020 clat (usec): min=95, max=1753, avg=128.26, stdev=47.38 00:37:00.020 lat (usec): min=108, max=1791, avg=144.38, stdev=49.78 00:37:00.020 clat percentiles (usec): 00:37:00.020 | 1.00th=[ 104], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 117], 00:37:00.020 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 129], 00:37:00.020 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 149], 00:37:00.020 | 99.00th=[ 167], 99.50th=[ 178], 99.90th=[ 1205], 99.95th=[ 1385], 00:37:00.020 | 99.99th=[ 1762] 00:37:00.020 bw ( KiB/s): min=14264, max=14264, per=29.94%, avg=14264.00, stdev= 0.00, samples=1 00:37:00.020 iops : min= 3566, max= 3566, avg=3566.00, stdev= 0.00, samples=1 00:37:00.020 lat (usec) : 100=0.15%, 250=99.70%, 500=0.08%, 750=0.02% 00:37:00.020 lat (msec) : 2=0.06% 00:37:00.020 cpu : usr=1.20%, sys=6.50%, 
ctx=6562, majf=0, minf=23 00:37:00.020 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:00.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.020 issued rwts: total=3072,3489,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.020 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:00.020 job3: (groupid=0, jobs=1): err= 0: pid=74989: Wed Apr 17 08:33:33 2024 00:37:00.020 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:37:00.020 slat (nsec): min=6145, max=27960, avg=8436.27, stdev=1866.46 00:37:00.020 clat (usec): min=118, max=501, avg=230.17, stdev=40.63 00:37:00.020 lat (usec): min=127, max=508, avg=238.61, stdev=40.34 00:37:00.020 clat percentiles (usec): 00:37:00.020 | 1.00th=[ 135], 5.00th=[ 147], 10.00th=[ 155], 20.00th=[ 210], 00:37:00.020 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:37:00.020 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 277], 00:37:00.020 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 359], 99.95th=[ 396], 00:37:00.020 | 99.99th=[ 502] 00:37:00.020 write: IOPS=2449, BW=9798KiB/s (10.0MB/s)(9808KiB/1001msec); 0 zone resets 00:37:00.020 slat (usec): min=6, max=119, avg=14.82, stdev= 7.15 00:37:00.020 clat (usec): min=102, max=564, avg=191.63, stdev=37.93 00:37:00.020 lat (usec): min=114, max=578, avg=206.45, stdev=36.64 00:37:00.020 clat percentiles (usec): 00:37:00.020 | 1.00th=[ 114], 5.00th=[ 124], 10.00th=[ 130], 20.00th=[ 147], 00:37:00.020 | 30.00th=[ 188], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:37:00.020 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 227], 95.00th=[ 237], 00:37:00.020 | 99.00th=[ 258], 99.50th=[ 277], 99.90th=[ 392], 99.95th=[ 519], 00:37:00.020 | 99.99th=[ 562] 00:37:00.020 bw ( KiB/s): min=10984, max=10984, per=23.05%, avg=10984.00, stdev= 0.00, samples=1 00:37:00.020 iops : min= 2746, max= 2746, avg=2746.00, stdev= 0.00, samples=1 00:37:00.020 lat (usec) : 250=85.38%, 500=14.56%, 750=0.07% 00:37:00.020 cpu : usr=0.90%, sys=4.20%, ctx=4500, majf=0, minf=9 00:37:00.020 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:00.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.020 issued rwts: total=2048,2452,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.020 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:00.020 00:37:00.020 Run status group 0 (all jobs): 00:37:00.020 READ: bw=41.5MiB/s (43.5MB/s), 8184KiB/s-13.5MiB/s (8380kB/s-14.2MB/s), io=41.5MiB (43.6MB), run=1001-1001msec 00:37:00.020 WRITE: bw=46.5MiB/s (48.8MB/s), 9582KiB/s-14.0MiB/s (9812kB/s-14.7MB/s), io=46.6MiB (48.8MB), run=1001-1001msec 00:37:00.020 00:37:00.020 Disk stats (read/write): 00:37:00.020 nvme0n1: ios=1930/2048, merge=0/0, ticks=444/387, in_queue=831, util=88.78% 00:37:00.020 nvme0n2: ios=3121/3082, merge=0/0, ticks=461/369, in_queue=830, util=89.33% 00:37:00.020 nvme0n3: ios=2762/3072, merge=0/0, ticks=464/422, in_queue=886, util=90.00% 00:37:00.020 nvme0n4: ios=1923/2048, merge=0/0, ticks=445/395, in_queue=840, util=89.96% 00:37:00.020 08:33:33 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:37:00.020 [global] 00:37:00.020 thread=1 00:37:00.020 invalidate=1 00:37:00.020 rw=write 00:37:00.020 time_based=1 00:37:00.020 runtime=1 
00:37:00.020 ioengine=libaio 00:37:00.020 direct=1 00:37:00.020 bs=4096 00:37:00.020 iodepth=128 00:37:00.020 norandommap=0 00:37:00.020 numjobs=1 00:37:00.020 00:37:00.020 verify_dump=1 00:37:00.020 verify_backlog=512 00:37:00.020 verify_state_save=0 00:37:00.020 do_verify=1 00:37:00.020 verify=crc32c-intel 00:37:00.020 [job0] 00:37:00.020 filename=/dev/nvme0n1 00:37:00.020 [job1] 00:37:00.020 filename=/dev/nvme0n2 00:37:00.020 [job2] 00:37:00.020 filename=/dev/nvme0n3 00:37:00.020 [job3] 00:37:00.020 filename=/dev/nvme0n4 00:37:00.020 Could not set queue depth (nvme0n1) 00:37:00.020 Could not set queue depth (nvme0n2) 00:37:00.020 Could not set queue depth (nvme0n3) 00:37:00.020 Could not set queue depth (nvme0n4) 00:37:00.020 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:00.020 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:00.020 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:00.020 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:00.020 fio-3.35 00:37:00.020 Starting 4 threads 00:37:01.395 00:37:01.395 job0: (groupid=0, jobs=1): err= 0: pid=75047: Wed Apr 17 08:33:34 2024 00:37:01.395 read: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec) 00:37:01.395 slat (usec): min=4, max=13715, avg=254.95, stdev=1201.10 00:37:01.395 clat (usec): min=16314, max=57082, avg=33294.36, stdev=8535.23 00:37:01.395 lat (usec): min=16771, max=57113, avg=33549.30, stdev=8526.41 00:37:01.395 clat percentiles (usec): 00:37:01.395 | 1.00th=[18482], 5.00th=[21103], 10.00th=[23462], 20.00th=[24249], 00:37:01.395 | 30.00th=[27919], 40.00th=[30802], 50.00th=[33817], 60.00th=[34866], 00:37:01.395 | 70.00th=[36439], 80.00th=[40109], 90.00th=[44827], 95.00th=[49021], 00:37:01.395 | 99.00th=[54789], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:37:01.395 | 99.99th=[56886] 00:37:01.395 write: IOPS=2282, BW=9129KiB/s (9348kB/s)(9156KiB/1003msec); 0 zone resets 00:37:01.395 slat (usec): min=8, max=7909, avg=201.51, stdev=894.67 00:37:01.395 clat (usec): min=156, max=52345, avg=25162.22, stdev=9173.24 00:37:01.395 lat (usec): min=8065, max=52493, avg=25363.72, stdev=9203.12 00:37:01.395 clat percentiles (usec): 00:37:01.395 | 1.00th=[ 8848], 5.00th=[16319], 10.00th=[16581], 20.00th=[17433], 00:37:01.395 | 30.00th=[17695], 40.00th=[18744], 50.00th=[22676], 60.00th=[25822], 00:37:01.395 | 70.00th=[29230], 80.00th=[34341], 90.00th=[36439], 95.00th=[44303], 00:37:01.395 | 99.00th=[50594], 99.50th=[50594], 99.90th=[52167], 99.95th=[52167], 00:37:01.395 | 99.99th=[52167] 00:37:01.395 bw ( KiB/s): min= 8352, max= 8936, per=12.90%, avg=8644.00, stdev=412.95, samples=2 00:37:01.395 iops : min= 2088, max= 2234, avg=2161.00, stdev=103.24, samples=2 00:37:01.395 lat (usec) : 250=0.02% 00:37:01.395 lat (msec) : 10=0.78%, 20=23.03%, 50=73.25%, 100=2.91% 00:37:01.395 cpu : usr=1.50%, sys=7.98%, ctx=278, majf=0, minf=11 00:37:01.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:37:01.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:01.395 issued rwts: total=2048,2289,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:01.395 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:01.395 job1: (groupid=0, jobs=1): err= 0: 
pid=75048: Wed Apr 17 08:33:34 2024 00:37:01.395 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:37:01.395 slat (usec): min=7, max=3563, avg=82.20, stdev=332.98 00:37:01.395 clat (usec): min=7898, max=14561, avg=10945.73, stdev=1101.40 00:37:01.395 lat (usec): min=7943, max=15955, avg=11027.92, stdev=1080.51 00:37:01.395 clat percentiles (usec): 00:37:01.395 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9896], 00:37:01.395 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:37:01.395 | 70.00th=[11600], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:37:01.395 | 99.00th=[13566], 99.50th=[14091], 99.90th=[14484], 99.95th=[14484], 00:37:01.395 | 99.99th=[14615] 00:37:01.395 write: IOPS=5803, BW=22.7MiB/s (23.8MB/s)(22.7MiB/1002msec); 0 zone resets 00:37:01.395 slat (usec): min=17, max=2860, avg=82.76, stdev=283.55 00:37:01.395 clat (usec): min=188, max=14913, avg=11179.20, stdev=1360.78 00:37:01.395 lat (usec): min=2846, max=14948, avg=11261.96, stdev=1349.34 00:37:01.395 clat percentiles (usec): 00:37:01.395 | 1.00th=[ 7308], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10159], 00:37:01.395 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:37:01.395 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:37:01.395 | 99.00th=[14091], 99.50th=[14353], 99.90th=[14877], 99.95th=[14877], 00:37:01.395 | 99.99th=[14877] 00:37:01.395 bw ( KiB/s): min=21600, max=23943, per=34.00%, avg=22771.50, stdev=1656.75, samples=2 00:37:01.395 iops : min= 5400, max= 5985, avg=5692.50, stdev=413.66, samples=2 00:37:01.395 lat (usec) : 250=0.01% 00:37:01.395 lat (msec) : 4=0.35%, 10=18.97%, 20=80.67% 00:37:01.395 cpu : usr=6.09%, sys=22.98%, ctx=782, majf=0, minf=7 00:37:01.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:37:01.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:01.395 issued rwts: total=5632,5815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:01.395 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:01.395 job2: (groupid=0, jobs=1): err= 0: pid=75049: Wed Apr 17 08:33:34 2024 00:37:01.395 read: IOPS=4616, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1002msec) 00:37:01.395 slat (usec): min=7, max=2958, avg=96.18, stdev=376.26 00:37:01.395 clat (usec): min=1138, max=17916, avg=12872.31, stdev=1357.65 00:37:01.395 lat (usec): min=1157, max=17948, avg=12968.49, stdev=1330.09 00:37:01.395 clat percentiles (usec): 00:37:01.395 | 1.00th=[10421], 5.00th=[10814], 10.00th=[11207], 20.00th=[11863], 00:37:01.395 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13042], 60.00th=[13173], 00:37:01.395 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14222], 95.00th=[14746], 00:37:01.395 | 99.00th=[15795], 99.50th=[16909], 99.90th=[17957], 99.95th=[17957], 00:37:01.395 | 99.99th=[17957] 00:37:01.395 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:37:01.395 slat (usec): min=22, max=4369, avg=98.63, stdev=349.12 00:37:01.395 clat (usec): min=3795, max=16763, avg=13069.71, stdev=1415.45 00:37:01.395 lat (usec): min=3843, max=16802, avg=13168.33, stdev=1425.52 00:37:01.395 clat percentiles (usec): 00:37:01.395 | 1.00th=[ 8160], 5.00th=[11469], 10.00th=[11731], 20.00th=[12125], 00:37:01.395 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:37:01.395 | 70.00th=[13698], 80.00th=[14353], 90.00th=[14877], 95.00th=[15139], 00:37:01.395 | 
99.00th=[16319], 99.50th=[16450], 99.90th=[16712], 99.95th=[16712], 00:37:01.395 | 99.99th=[16712] 00:37:01.395 bw ( KiB/s): min=19608, max=20521, per=29.95%, avg=20064.50, stdev=645.59, samples=2 00:37:01.395 iops : min= 4902, max= 5130, avg=5016.00, stdev=161.22, samples=2 00:37:01.395 lat (msec) : 2=0.12%, 4=0.10%, 10=0.63%, 20=99.15% 00:37:01.395 cpu : usr=5.49%, sys=19.98%, ctx=766, majf=0, minf=10 00:37:01.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:37:01.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:01.395 issued rwts: total=4626,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:01.395 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:01.395 job3: (groupid=0, jobs=1): err= 0: pid=75050: Wed Apr 17 08:33:34 2024 00:37:01.395 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:37:01.395 slat (usec): min=7, max=10028, avg=135.52, stdev=710.04 00:37:01.395 clat (usec): min=10514, max=28770, avg=17171.24, stdev=2316.34 00:37:01.395 lat (usec): min=10546, max=29019, avg=17306.77, stdev=2399.43 00:37:01.395 clat percentiles (usec): 00:37:01.395 | 1.00th=[11207], 5.00th=[13960], 10.00th=[14877], 20.00th=[15664], 00:37:01.395 | 30.00th=[15926], 40.00th=[16450], 50.00th=[16909], 60.00th=[17171], 00:37:01.395 | 70.00th=[17957], 80.00th=[18744], 90.00th=[20055], 95.00th=[21365], 00:37:01.395 | 99.00th=[24773], 99.50th=[25560], 99.90th=[27132], 99.95th=[28181], 00:37:01.395 | 99.99th=[28705] 00:37:01.395 write: IOPS=3561, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:37:01.395 slat (usec): min=23, max=7133, avg=154.18, stdev=654.21 00:37:01.395 clat (usec): min=196, max=36793, avg=20675.87, stdev=6609.20 00:37:01.395 lat (usec): min=7077, max=36826, avg=20830.06, stdev=6660.12 00:37:01.395 clat percentiles (usec): 00:37:01.395 | 1.00th=[ 8356], 5.00th=[13829], 10.00th=[14615], 20.00th=[15139], 00:37:01.395 | 30.00th=[16450], 40.00th=[17171], 50.00th=[17695], 60.00th=[20579], 00:37:01.395 | 70.00th=[23200], 80.00th=[25822], 90.00th=[32375], 95.00th=[34341], 00:37:01.395 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:37:01.395 | 99.99th=[36963] 00:37:01.395 bw ( KiB/s): min=13266, max=14312, per=20.59%, avg=13789.00, stdev=739.63, samples=2 00:37:01.395 iops : min= 3316, max= 3578, avg=3447.00, stdev=185.26, samples=2 00:37:01.395 lat (usec) : 250=0.02% 00:37:01.395 lat (msec) : 10=0.98%, 20=72.14%, 50=26.87% 00:37:01.395 cpu : usr=4.09%, sys=13.77%, ctx=336, majf=0, minf=13 00:37:01.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:37:01.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:01.395 issued rwts: total=3072,3572,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:01.395 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:01.396 00:37:01.396 Run status group 0 (all jobs): 00:37:01.396 READ: bw=59.9MiB/s (62.8MB/s), 8167KiB/s-22.0MiB/s (8364kB/s-23.0MB/s), io=60.1MiB (63.0MB), run=1002-1003msec 00:37:01.396 WRITE: bw=65.4MiB/s (68.6MB/s), 9129KiB/s-22.7MiB/s (9348kB/s-23.8MB/s), io=65.6MiB (68.8MB), run=1002-1003msec 00:37:01.396 00:37:01.396 Disk stats (read/write): 00:37:01.396 nvme0n1: ios=1841/2048, merge=0/0, ticks=14383/11925, in_queue=26308, util=89.28% 00:37:01.396 nvme0n2: ios=4897/5120, merge=0/0, 
ticks=15770/15678, in_queue=31448, util=89.63% 00:37:01.396 nvme0n3: ios=4125/4433, merge=0/0, ticks=12026/12110, in_queue=24136, util=90.21% 00:37:01.396 nvme0n4: ios=2639/3072, merge=0/0, ticks=21237/29360, in_queue=50597, util=89.98% 00:37:01.396 08:33:34 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:37:01.396 [global] 00:37:01.396 thread=1 00:37:01.396 invalidate=1 00:37:01.396 rw=randwrite 00:37:01.396 time_based=1 00:37:01.396 runtime=1 00:37:01.396 ioengine=libaio 00:37:01.396 direct=1 00:37:01.396 bs=4096 00:37:01.396 iodepth=128 00:37:01.396 norandommap=0 00:37:01.396 numjobs=1 00:37:01.396 00:37:01.396 verify_dump=1 00:37:01.396 verify_backlog=512 00:37:01.396 verify_state_save=0 00:37:01.396 do_verify=1 00:37:01.396 verify=crc32c-intel 00:37:01.396 [job0] 00:37:01.396 filename=/dev/nvme0n1 00:37:01.396 [job1] 00:37:01.396 filename=/dev/nvme0n2 00:37:01.396 [job2] 00:37:01.396 filename=/dev/nvme0n3 00:37:01.396 [job3] 00:37:01.396 filename=/dev/nvme0n4 00:37:01.396 Could not set queue depth (nvme0n1) 00:37:01.396 Could not set queue depth (nvme0n2) 00:37:01.396 Could not set queue depth (nvme0n3) 00:37:01.396 Could not set queue depth (nvme0n4) 00:37:01.396 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:01.396 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:01.396 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:01.396 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:01.396 fio-3.35 00:37:01.396 Starting 4 threads 00:37:02.773 00:37:02.773 job0: (groupid=0, jobs=1): err= 0: pid=75110: Wed Apr 17 08:33:35 2024 00:37:02.773 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:37:02.773 slat (usec): min=4, max=11238, avg=104.24, stdev=530.13 00:37:02.773 clat (usec): min=7112, max=38803, avg=13841.85, stdev=5786.96 00:37:02.773 lat (usec): min=7149, max=38832, avg=13946.08, stdev=5830.82 00:37:02.773 clat percentiles (usec): 00:37:02.773 | 1.00th=[ 7570], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[10552], 00:37:02.773 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:37:02.773 | 70.00th=[12125], 80.00th=[17957], 90.00th=[23725], 95.00th=[27657], 00:37:02.773 | 99.00th=[30802], 99.50th=[32375], 99.90th=[33817], 99.95th=[34866], 00:37:02.773 | 99.99th=[39060] 00:37:02.773 write: IOPS=4796, BW=18.7MiB/s (19.6MB/s)(18.8MiB/1003msec); 0 zone resets 00:37:02.773 slat (usec): min=7, max=6094, avg=98.35, stdev=408.15 00:37:02.773 clat (usec): min=2431, max=30375, avg=13095.50, stdev=3978.54 00:37:02.773 lat (usec): min=2466, max=31298, avg=13193.85, stdev=3992.79 00:37:02.773 clat percentiles (usec): 00:37:02.773 | 1.00th=[ 6194], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[10814], 00:37:02.773 | 30.00th=[11338], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:37:02.773 | 70.00th=[12780], 80.00th=[15401], 90.00th=[19268], 95.00th=[20579], 00:37:02.773 | 99.00th=[26870], 99.50th=[28443], 99.90th=[30278], 99.95th=[30278], 00:37:02.773 | 99.99th=[30278] 00:37:02.773 bw ( KiB/s): min=13752, max=23767, per=30.73%, avg=18759.50, stdev=7081.67, samples=2 00:37:02.773 iops : min= 3438, max= 5941, avg=4689.50, stdev=1769.89, samples=2 00:37:02.774 lat (msec) : 4=0.19%, 10=12.40%, 20=76.45%, 50=10.96% 00:37:02.774 cpu : 
usr=5.79%, sys=17.66%, ctx=721, majf=0, minf=11 00:37:02.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:37:02.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:02.774 issued rwts: total=4608,4811,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.774 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:02.774 job1: (groupid=0, jobs=1): err= 0: pid=75111: Wed Apr 17 08:33:35 2024 00:37:02.774 read: IOPS=3941, BW=15.4MiB/s (16.1MB/s)(15.5MiB/1005msec) 00:37:02.774 slat (usec): min=6, max=11991, avg=109.31, stdev=673.17 00:37:02.774 clat (usec): min=4010, max=38179, avg=14026.16, stdev=4538.70 00:37:02.774 lat (usec): min=4850, max=38200, avg=14135.48, stdev=4586.53 00:37:02.774 clat percentiles (usec): 00:37:02.774 | 1.00th=[ 5669], 5.00th=[ 8029], 10.00th=[ 9765], 20.00th=[10814], 00:37:02.774 | 30.00th=[12256], 40.00th=[12911], 50.00th=[13435], 60.00th=[14091], 00:37:02.774 | 70.00th=[15008], 80.00th=[16450], 90.00th=[17695], 95.00th=[21103], 00:37:02.774 | 99.00th=[33424], 99.50th=[34866], 99.90th=[38011], 99.95th=[38011], 00:37:02.774 | 99.99th=[38011] 00:37:02.774 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:37:02.774 slat (usec): min=9, max=13353, avg=128.87, stdev=657.95 00:37:02.774 clat (usec): min=4034, max=80839, avg=17500.32, stdev=13812.41 00:37:02.774 lat (usec): min=4066, max=80852, avg=17629.19, stdev=13904.71 00:37:02.774 clat percentiles (usec): 00:37:02.774 | 1.00th=[ 4490], 5.00th=[ 7504], 10.00th=[ 8717], 20.00th=[ 9634], 00:37:02.774 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[11207], 60.00th=[13698], 00:37:02.774 | 70.00th=[15270], 80.00th=[25560], 90.00th=[33817], 95.00th=[51643], 00:37:02.774 | 99.00th=[70779], 99.50th=[77071], 99.90th=[81265], 99.95th=[81265], 00:37:02.774 | 99.99th=[81265] 00:37:02.774 bw ( KiB/s): min=12288, max=20480, per=26.84%, avg=16384.00, stdev=5792.62, samples=2 00:37:02.774 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:37:02.774 lat (msec) : 10=25.44%, 20=58.66%, 50=13.08%, 100=2.82% 00:37:02.774 cpu : usr=5.38%, sys=13.15%, ctx=470, majf=0, minf=12 00:37:02.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:37:02.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:02.774 issued rwts: total=3961,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.774 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:02.774 job2: (groupid=0, jobs=1): err= 0: pid=75112: Wed Apr 17 08:33:35 2024 00:37:02.774 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:37:02.774 slat (usec): min=5, max=35088, avg=214.88, stdev=1545.61 00:37:02.774 clat (usec): min=10650, max=76358, avg=27034.34, stdev=13852.88 00:37:02.774 lat (usec): min=10759, max=76401, avg=27249.22, stdev=13985.53 00:37:02.774 clat percentiles (usec): 00:37:02.774 | 1.00th=[11338], 5.00th=[13698], 10.00th=[14222], 20.00th=[15008], 00:37:02.774 | 30.00th=[15533], 40.00th=[17171], 50.00th=[23200], 60.00th=[29754], 00:37:02.774 | 70.00th=[32375], 80.00th=[40109], 90.00th=[43254], 95.00th=[60031], 00:37:02.774 | 99.00th=[70779], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:37:02.774 | 99.99th=[76022] 00:37:02.774 write: IOPS=2120, BW=8481KiB/s (8684kB/s)(8540KiB/1007msec); 0 zone resets 00:37:02.774 slat (usec): 
min=12, max=26176, avg=252.70, stdev=1418.41 00:37:02.774 clat (usec): min=5582, max=69297, avg=33056.32, stdev=13586.07 00:37:02.774 lat (usec): min=10406, max=69368, avg=33309.02, stdev=13682.96 00:37:02.774 clat percentiles (usec): 00:37:02.774 | 1.00th=[11600], 5.00th=[15270], 10.00th=[16581], 20.00th=[23987], 00:37:02.774 | 30.00th=[25822], 40.00th=[26870], 50.00th=[29754], 60.00th=[32900], 00:37:02.774 | 70.00th=[35390], 80.00th=[42730], 90.00th=[54789], 95.00th=[62653], 00:37:02.774 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:37:02.774 | 99.99th=[69731] 00:37:02.774 bw ( KiB/s): min= 6304, max=10100, per=13.44%, avg=8202.00, stdev=2684.18, samples=2 00:37:02.774 iops : min= 1576, max= 2525, avg=2050.50, stdev=671.04, samples=2 00:37:02.774 lat (msec) : 10=0.02%, 20=30.38%, 50=57.71%, 100=11.88% 00:37:02.774 cpu : usr=3.08%, sys=7.65%, ctx=318, majf=0, minf=9 00:37:02.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:37:02.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:02.774 issued rwts: total=2048,2135,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.774 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:02.774 job3: (groupid=0, jobs=1): err= 0: pid=75113: Wed Apr 17 08:33:35 2024 00:37:02.774 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:37:02.774 slat (usec): min=5, max=10747, avg=121.27, stdev=611.60 00:37:02.774 clat (usec): min=7412, max=38983, avg=15656.67, stdev=6057.67 00:37:02.774 lat (usec): min=7445, max=39002, avg=15777.93, stdev=6097.84 00:37:02.774 clat percentiles (usec): 00:37:02.774 | 1.00th=[ 8356], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[12125], 00:37:02.774 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13042], 60.00th=[13566], 00:37:02.774 | 70.00th=[15270], 80.00th=[19268], 90.00th=[26084], 95.00th=[30278], 00:37:02.774 | 99.00th=[34341], 99.50th=[35390], 99.90th=[39060], 99.95th=[39060], 00:37:02.774 | 99.99th=[39060] 00:37:02.774 write: IOPS=4321, BW=16.9MiB/s (17.7MB/s)(16.9MiB/1001msec); 0 zone resets 00:37:02.774 slat (usec): min=7, max=5888, avg=106.66, stdev=438.59 00:37:02.774 clat (usec): min=259, max=27565, avg=14413.51, stdev=3682.86 00:37:02.774 lat (usec): min=3459, max=27599, avg=14520.17, stdev=3688.69 00:37:02.774 clat percentiles (usec): 00:37:02.774 | 1.00th=[ 6652], 5.00th=[ 9372], 10.00th=[11469], 20.00th=[12518], 00:37:02.774 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13173], 60.00th=[13435], 00:37:02.774 | 70.00th=[14091], 80.00th=[17695], 90.00th=[20055], 95.00th=[22152], 00:37:02.774 | 99.00th=[23987], 99.50th=[25822], 99.90th=[27132], 99.95th=[27132], 00:37:02.774 | 99.99th=[27657] 00:37:02.774 bw ( KiB/s): min=13112, max=20521, per=27.55%, avg=16816.50, stdev=5238.95, samples=2 00:37:02.774 iops : min= 3278, max= 5130, avg=4204.00, stdev=1309.56, samples=2 00:37:02.774 lat (usec) : 500=0.01% 00:37:02.774 lat (msec) : 4=0.19%, 10=5.91%, 20=79.83%, 50=14.06% 00:37:02.774 cpu : usr=4.80%, sys=16.30%, ctx=760, majf=0, minf=13 00:37:02.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:37:02.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:02.774 issued rwts: total=4096,4326,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.774 latency : target=0, window=0, percentile=100.00%, depth=128 
00:37:02.774 00:37:02.774 Run status group 0 (all jobs): 00:37:02.775 READ: bw=57.1MiB/s (59.8MB/s), 8135KiB/s-17.9MiB/s (8330kB/s-18.8MB/s), io=57.5MiB (60.3MB), run=1001-1007msec 00:37:02.775 WRITE: bw=59.6MiB/s (62.5MB/s), 8481KiB/s-18.7MiB/s (8684kB/s-19.6MB/s), io=60.0MiB (62.9MB), run=1001-1007msec 00:37:02.775 00:37:02.775 Disk stats (read/write): 00:37:02.775 nvme0n1: ios=4146/4151, merge=0/0, ticks=22612/19746, in_queue=42358, util=88.97% 00:37:02.775 nvme0n2: ios=3182/3584, merge=0/0, ticks=41491/62956, in_queue=104447, util=89.92% 00:37:02.775 nvme0n3: ios=1832/2048, merge=0/0, ticks=18723/31803, in_queue=50526, util=90.94% 00:37:02.775 nvme0n4: ios=3618/3823, merge=0/0, ticks=22481/20550, in_queue=43031, util=90.09% 00:37:02.775 08:33:35 -- target/fio.sh@55 -- # sync 00:37:02.775 08:33:35 -- target/fio.sh@59 -- # fio_pid=75126 00:37:02.775 08:33:35 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:37:02.775 08:33:35 -- target/fio.sh@61 -- # sleep 3 00:37:02.775 [global] 00:37:02.775 thread=1 00:37:02.775 invalidate=1 00:37:02.775 rw=read 00:37:02.775 time_based=1 00:37:02.775 runtime=10 00:37:02.775 ioengine=libaio 00:37:02.775 direct=1 00:37:02.775 bs=4096 00:37:02.775 iodepth=1 00:37:02.775 norandommap=1 00:37:02.775 numjobs=1 00:37:02.775 00:37:02.775 [job0] 00:37:02.775 filename=/dev/nvme0n1 00:37:02.775 [job1] 00:37:02.775 filename=/dev/nvme0n2 00:37:02.775 [job2] 00:37:02.775 filename=/dev/nvme0n3 00:37:02.775 [job3] 00:37:02.775 filename=/dev/nvme0n4 00:37:02.775 Could not set queue depth (nvme0n1) 00:37:02.775 Could not set queue depth (nvme0n2) 00:37:02.775 Could not set queue depth (nvme0n3) 00:37:02.775 Could not set queue depth (nvme0n4) 00:37:02.775 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:02.775 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:02.775 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:02.775 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:02.775 fio-3.35 00:37:02.775 Starting 4 threads 00:37:06.127 08:33:38 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:37:06.127 fio: pid=75169, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:37:06.127 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=54870016, buflen=4096 00:37:06.127 08:33:39 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:37:06.127 fio: pid=75168, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:37:06.127 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=75227136, buflen=4096 00:37:06.127 08:33:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:06.127 08:33:39 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:37:06.385 fio: pid=75166, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:37:06.385 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=16764928, buflen=4096 00:37:06.385 08:33:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:06.385 08:33:39 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc1 00:37:06.644 fio: pid=75167, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:37:06.644 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=44036096, buflen=4096 00:37:06.644 00:37:06.644 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75166: Wed Apr 17 08:33:39 2024 00:37:06.644 read: IOPS=6287, BW=24.6MiB/s (25.8MB/s)(80.0MiB/3257msec) 00:37:06.644 slat (usec): min=7, max=13349, avg=12.39, stdev=169.15 00:37:06.644 clat (usec): min=62, max=2446, avg=145.71, stdev=29.35 00:37:06.644 lat (usec): min=110, max=13516, avg=158.10, stdev=172.32 00:37:06.644 clat percentiles (usec): 00:37:06.644 | 1.00th=[ 119], 5.00th=[ 128], 10.00th=[ 133], 20.00th=[ 137], 00:37:06.644 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:37:06.644 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 165], 00:37:06.644 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 273], 99.95th=[ 510], 00:37:06.644 | 99.99th=[ 1614] 00:37:06.644 bw ( KiB/s): min=24405, max=25720, per=35.13%, avg=25139.50, stdev=510.58, samples=6 00:37:06.644 iops : min= 6101, max= 6430, avg=6284.83, stdev=127.72, samples=6 00:37:06.644 lat (usec) : 100=0.01%, 250=99.85%, 500=0.07%, 750=0.02%, 1000=0.01% 00:37:06.644 lat (msec) : 2=0.02%, 4=0.01% 00:37:06.644 cpu : usr=0.74%, sys=5.34%, ctx=20487, majf=0, minf=1 00:37:06.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:06.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.644 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.644 issued rwts: total=20478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:06.644 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75167: Wed Apr 17 08:33:39 2024 00:37:06.644 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(42.0MiB/3521msec) 00:37:06.644 slat (usec): min=6, max=13895, avg=21.21, stdev=235.43 00:37:06.644 clat (usec): min=96, max=5512, avg=304.73, stdev=154.40 00:37:06.644 lat (usec): min=103, max=14156, avg=325.94, stdev=282.46 00:37:06.644 clat percentiles (usec): 00:37:06.644 | 1.00th=[ 104], 5.00th=[ 110], 10.00th=[ 114], 20.00th=[ 129], 00:37:06.644 | 30.00th=[ 157], 40.00th=[ 351], 50.00th=[ 367], 60.00th=[ 379], 00:37:06.644 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 412], 95.00th=[ 433], 00:37:06.644 | 99.00th=[ 523], 99.50th=[ 562], 99.90th=[ 1156], 99.95th=[ 2474], 00:37:06.644 | 99.99th=[ 4293] 00:37:06.644 bw ( KiB/s): min= 9176, max=12437, per=14.20%, avg=10162.17, stdev=1151.79, samples=6 00:37:06.644 iops : min= 2294, max= 3109, avg=2540.50, stdev=287.85, samples=6 00:37:06.644 lat (usec) : 100=0.16%, 250=32.33%, 500=66.03%, 750=1.29%, 1000=0.02% 00:37:06.644 lat (msec) : 2=0.09%, 4=0.05%, 10=0.02% 00:37:06.644 cpu : usr=0.82%, sys=4.15%, ctx=10760, majf=0, minf=1 00:37:06.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:06.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.645 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.645 issued rwts: total=10752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.645 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:06.645 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75168: Wed Apr 17 08:33:39 
2024 00:37:06.645 read: IOPS=6010, BW=23.5MiB/s (24.6MB/s)(71.7MiB/3056msec) 00:37:06.645 slat (usec): min=5, max=11631, avg=11.03, stdev=117.26 00:37:06.645 clat (usec): min=102, max=3623, avg=154.43, stdev=50.44 00:37:06.645 lat (usec): min=111, max=11835, avg=165.46, stdev=128.23 00:37:06.645 clat percentiles (usec): 00:37:06.645 | 1.00th=[ 125], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:37:06.645 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 151], 00:37:06.645 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 192], 00:37:06.645 | 99.00th=[ 359], 99.50th=[ 383], 99.90th=[ 412], 99.95th=[ 437], 00:37:06.645 | 99.99th=[ 3195] 00:37:06.645 bw ( KiB/s): min=24824, max=25376, per=35.17%, avg=25164.80, stdev=243.48, samples=5 00:37:06.645 iops : min= 6206, max= 6344, avg=6291.20, stdev=60.87, samples=5 00:37:06.645 lat (usec) : 250=97.64%, 500=2.33% 00:37:06.645 lat (msec) : 2=0.02%, 4=0.01% 00:37:06.645 cpu : usr=0.75%, sys=4.94%, ctx=18372, majf=0, minf=1 00:37:06.645 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:06.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.645 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.645 issued rwts: total=18367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.645 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:06.645 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=75169: Wed Apr 17 08:33:39 2024 00:37:06.645 read: IOPS=4682, BW=18.3MiB/s (19.2MB/s)(52.3MiB/2861msec) 00:37:06.645 slat (usec): min=5, max=105, avg= 9.76, stdev= 3.25 00:37:06.645 clat (usec): min=115, max=2131, avg=202.74, stdev=57.73 00:37:06.645 lat (usec): min=124, max=2156, avg=212.51, stdev=57.89 00:37:06.645 clat percentiles (usec): 00:37:06.645 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:37:06.645 | 30.00th=[ 157], 40.00th=[ 169], 50.00th=[ 215], 60.00th=[ 225], 00:37:06.645 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 281], 00:37:06.645 | 99.00th=[ 388], 99.50th=[ 408], 99.90th=[ 603], 99.95th=[ 676], 00:37:06.645 | 99.99th=[ 1287] 00:37:06.645 bw ( KiB/s): min=18408, max=19520, per=26.82%, avg=19190.40, stdev=448.26, samples=5 00:37:06.645 iops : min= 4602, max= 4880, avg=4797.60, stdev=112.07, samples=5 00:37:06.645 lat (usec) : 250=87.40%, 500=12.47%, 750=0.11% 00:37:06.645 lat (msec) : 2=0.01%, 4=0.01% 00:37:06.645 cpu : usr=0.28%, sys=4.06%, ctx=13400, majf=0, minf=2 00:37:06.645 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:06.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.645 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.645 issued rwts: total=13397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.645 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:06.645 00:37:06.645 Run status group 0 (all jobs): 00:37:06.645 READ: bw=69.9MiB/s (73.3MB/s), 11.9MiB/s-24.6MiB/s (12.5MB/s-25.8MB/s), io=246MiB (258MB), run=2861-3521msec 00:37:06.645 00:37:06.645 Disk stats (read/write): 00:37:06.645 nvme0n1: ios=19654/0, merge=0/0, ticks=2941/0, in_queue=2941, util=95.04% 00:37:06.645 nvme0n2: ios=9633/0, merge=0/0, ticks=3186/0, in_queue=3186, util=95.25% 00:37:06.645 nvme0n3: ios=17591/0, merge=0/0, ticks=2749/0, in_queue=2749, util=96.62% 00:37:06.645 nvme0n4: ios=12637/0, merge=0/0, ticks=2532/0, in_queue=2532, util=96.41% 00:37:06.645 
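The err=121 (Remote I/O error) results above are the intended outcome of this stage: fio reads from the four namespaces for 10 seconds while the test deletes the backing bdevs out from under the live subsystem, so every job ends with a remote I/O failure. A condensed sketch of the whole hotplug sequence, assembled from the commands traced above and below (the for-loop is a condensation of the script's per-bdev deletions, not its literal text):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # 10 s of queued 4 KiB reads against nvme0n1..n4, run in the background
  /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3                                   # let the jobs ramp up first

  # pull the backing bdevs out from under the connected namespaces;
  # in-flight and subsequent reads now fail with err=121 as intended
  $rpc bdev_raid_delete concat0
  $rpc bdev_raid_delete raid0
  for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      $rpc bdev_malloc_delete "$malloc_bdev"
  done

  wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'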
08:33:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:06.645 08:33:39 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:37:06.903 08:33:40 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:06.903 08:33:40 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:37:07.161 08:33:40 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:07.161 08:33:40 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:37:07.420 08:33:40 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:07.420 08:33:40 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:37:07.420 08:33:40 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:07.420 08:33:40 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:37:07.679 08:33:40 -- target/fio.sh@69 -- # fio_status=0 00:37:07.679 08:33:40 -- target/fio.sh@70 -- # wait 75126 00:37:07.679 08:33:40 -- target/fio.sh@70 -- # fio_status=4 00:37:07.679 08:33:40 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:07.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:07.938 08:33:41 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:07.938 08:33:41 -- common/autotest_common.sh@1198 -- # local i=0 00:37:07.938 08:33:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:37:07.938 08:33:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:07.938 08:33:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:07.938 08:33:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:37:07.938 08:33:41 -- common/autotest_common.sh@1210 -- # return 0 00:37:07.938 08:33:41 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:37:07.938 08:33:41 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:37:07.938 nvmf hotplug test: fio failed as expected 00:37:07.938 08:33:41 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:08.197 08:33:41 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:37:08.197 08:33:41 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:37:08.197 08:33:41 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:37:08.197 08:33:41 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:37:08.197 08:33:41 -- target/fio.sh@91 -- # nvmftestfini 00:37:08.197 08:33:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:37:08.197 08:33:41 -- nvmf/common.sh@116 -- # sync 00:37:08.197 08:33:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:37:08.197 08:33:41 -- nvmf/common.sh@119 -- # set +e 00:37:08.197 08:33:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:37:08.197 08:33:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:37:08.197 rmmod nvme_tcp 00:37:08.197 rmmod nvme_fabrics 00:37:08.197 rmmod nvme_keyring 00:37:08.197 08:33:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:37:08.197 08:33:41 -- nvmf/common.sh@123 -- # set -e 00:37:08.197 08:33:41 -- nvmf/common.sh@124 -- # return 0 00:37:08.197 08:33:41 -- nvmf/common.sh@477 -- # '[' 
-n 74643 ']' 00:37:08.197 08:33:41 -- nvmf/common.sh@478 -- # killprocess 74643 00:37:08.197 08:33:41 -- common/autotest_common.sh@926 -- # '[' -z 74643 ']' 00:37:08.197 08:33:41 -- common/autotest_common.sh@930 -- # kill -0 74643 00:37:08.197 08:33:41 -- common/autotest_common.sh@931 -- # uname 00:37:08.197 08:33:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:08.197 08:33:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74643 00:37:08.198 killing process with pid 74643 00:37:08.198 08:33:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:37:08.198 08:33:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:37:08.198 08:33:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74643' 00:37:08.198 08:33:41 -- common/autotest_common.sh@945 -- # kill 74643 00:37:08.198 08:33:41 -- common/autotest_common.sh@950 -- # wait 74643 00:37:08.457 08:33:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:37:08.457 08:33:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:37:08.457 08:33:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:37:08.457 08:33:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:08.457 08:33:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:37:08.457 08:33:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:08.457 08:33:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:08.457 08:33:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:08.457 08:33:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:37:08.457 00:37:08.457 real 0m19.080s 00:37:08.457 user 1m13.751s 00:37:08.457 sys 0m7.707s 00:37:08.457 08:33:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:08.457 08:33:41 -- common/autotest_common.sh@10 -- # set +x 00:37:08.457 ************************************ 00:37:08.457 END TEST nvmf_fio_target 00:37:08.458 ************************************ 00:37:08.718 08:33:41 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:37:08.718 08:33:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:37:08.718 08:33:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:08.718 08:33:41 -- common/autotest_common.sh@10 -- # set +x 00:37:08.718 ************************************ 00:37:08.718 START TEST nvmf_bdevio 00:37:08.718 ************************************ 00:37:08.718 08:33:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:37:08.718 * Looking for test storage... 
00:37:08.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:08.718 08:33:41 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:08.718 08:33:41 -- nvmf/common.sh@7 -- # uname -s 00:37:08.718 08:33:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:08.718 08:33:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:08.718 08:33:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:08.718 08:33:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:08.718 08:33:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:08.718 08:33:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:08.718 08:33:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:08.718 08:33:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:08.718 08:33:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:08.718 08:33:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:08.718 08:33:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:37:08.718 08:33:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:37:08.718 08:33:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:08.718 08:33:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:08.718 08:33:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:08.718 08:33:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:08.718 08:33:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:08.718 08:33:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:08.718 08:33:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:08.718 08:33:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.718 08:33:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.718 08:33:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.718 08:33:41 -- 
paths/export.sh@5 -- # export PATH 00:37:08.719 08:33:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.719 08:33:41 -- nvmf/common.sh@46 -- # : 0 00:37:08.719 08:33:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:37:08.719 08:33:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:37:08.719 08:33:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:37:08.719 08:33:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:08.719 08:33:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:08.719 08:33:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:37:08.719 08:33:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:37:08.719 08:33:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:37:08.719 08:33:41 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:08.719 08:33:41 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:08.719 08:33:41 -- target/bdevio.sh@14 -- # nvmftestinit 00:37:08.719 08:33:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:37:08.719 08:33:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:08.719 08:33:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:37:08.719 08:33:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:37:08.719 08:33:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:37:08.719 08:33:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:08.719 08:33:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:08.719 08:33:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:08.719 08:33:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:37:08.719 08:33:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:37:08.719 08:33:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:37:08.719 08:33:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:37:08.719 08:33:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:37:08.719 08:33:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:37:08.719 08:33:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:08.719 08:33:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:08.719 08:33:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:37:08.719 08:33:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:37:08.719 08:33:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:08.719 08:33:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:08.719 08:33:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:08.719 08:33:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:08.719 08:33:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:08.719 08:33:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:08.719 08:33:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:08.719 08:33:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:08.719 08:33:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:37:08.719 
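The nomaster/down/delete commands around this point, with their "Cannot find device" and "Cannot open network namespace" complaints, are nvmf_veth_init idempotently tearing down whatever a previous run left behind; the rebuild that follows creates one initiator veth on the host and two target veths inside a fresh namespace, all joined by a bridge. Condensed from the ip commands traced below (link-up steps and the *_br bridge memberships are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target port
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # NVMF_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                              # the *_br peers are enslaved to this
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings that close the setup simply prove the initiator can reach 10.0.0.2 and 10.0.0.3, and that the namespace can reach 10.0.0.1 back, before any NVMe-oF traffic is attempted.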
08:33:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:37:08.719 Cannot find device "nvmf_tgt_br" 00:37:08.719 08:33:41 -- nvmf/common.sh@154 -- # true 00:37:08.719 08:33:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:37:08.719 Cannot find device "nvmf_tgt_br2" 00:37:08.719 08:33:42 -- nvmf/common.sh@155 -- # true 00:37:08.719 08:33:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:37:08.719 08:33:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:37:08.719 Cannot find device "nvmf_tgt_br" 00:37:08.719 08:33:42 -- nvmf/common.sh@157 -- # true 00:37:08.719 08:33:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:37:08.719 Cannot find device "nvmf_tgt_br2" 00:37:08.978 08:33:42 -- nvmf/common.sh@158 -- # true 00:37:08.978 08:33:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:37:08.978 08:33:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:37:08.978 08:33:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:08.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:08.978 08:33:42 -- nvmf/common.sh@161 -- # true 00:37:08.978 08:33:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:08.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:08.978 08:33:42 -- nvmf/common.sh@162 -- # true 00:37:08.979 08:33:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:37:08.979 08:33:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:08.979 08:33:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:08.979 08:33:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:08.979 08:33:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:08.979 08:33:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:08.979 08:33:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:08.979 08:33:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:37:08.979 08:33:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:37:08.979 08:33:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:37:08.979 08:33:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:37:08.979 08:33:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:37:08.979 08:33:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:37:08.979 08:33:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:08.979 08:33:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:08.979 08:33:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:08.979 08:33:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:37:08.979 08:33:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:37:08.979 08:33:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:37:08.979 08:33:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:08.979 08:33:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:08.979 08:33:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:08.979 08:33:42 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:08.979 08:33:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:37:08.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:08.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:37:08.979 00:37:08.979 --- 10.0.0.2 ping statistics --- 00:37:08.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:08.979 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:37:08.979 08:33:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:37:08.979 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:08.979 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:37:08.979 00:37:08.979 --- 10.0.0.3 ping statistics --- 00:37:08.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:08.979 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:37:08.979 08:33:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:08.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:08.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:37:08.979 00:37:08.979 --- 10.0.0.1 ping statistics --- 00:37:08.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:08.979 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:37:08.979 08:33:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:08.979 08:33:42 -- nvmf/common.sh@421 -- # return 0 00:37:08.979 08:33:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:37:08.979 08:33:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:08.979 08:33:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:37:08.979 08:33:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:37:08.979 08:33:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:08.979 08:33:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:37:08.979 08:33:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:37:08.979 08:33:42 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:37:08.979 08:33:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:37:08.979 08:33:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:37:08.979 08:33:42 -- common/autotest_common.sh@10 -- # set +x 00:37:08.979 08:33:42 -- nvmf/common.sh@469 -- # nvmfpid=75481 00:37:08.979 08:33:42 -- nvmf/common.sh@470 -- # waitforlisten 75481 00:37:08.979 08:33:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:37:08.979 08:33:42 -- common/autotest_common.sh@819 -- # '[' -z 75481 ']' 00:37:08.979 08:33:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:08.979 08:33:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:37:08.979 08:33:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:08.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:08.979 08:33:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:37:08.979 08:33:42 -- common/autotest_common.sh@10 -- # set +x 00:37:09.237 [2024-04-17 08:33:42.352863] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:37:09.238 [2024-04-17 08:33:42.352925] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:09.238 [2024-04-17 08:33:42.483038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:09.496 [2024-04-17 08:33:42.576607] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:37:09.496 [2024-04-17 08:33:42.576730] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:09.496 [2024-04-17 08:33:42.576738] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:09.496 [2024-04-17 08:33:42.576743] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:09.496 [2024-04-17 08:33:42.577030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:37:09.496 [2024-04-17 08:33:42.577257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:37:09.496 [2024-04-17 08:33:42.577482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:09.496 [2024-04-17 08:33:42.577485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:37:10.063 08:33:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:37:10.063 08:33:43 -- common/autotest_common.sh@852 -- # return 0 00:37:10.063 08:33:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:37:10.063 08:33:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:37:10.063 08:33:43 -- common/autotest_common.sh@10 -- # set +x 00:37:10.063 08:33:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:10.063 08:33:43 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:10.063 08:33:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:10.063 08:33:43 -- common/autotest_common.sh@10 -- # set +x 00:37:10.063 [2024-04-17 08:33:43.278332] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:10.063 08:33:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:10.063 08:33:43 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:10.063 08:33:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:10.063 08:33:43 -- common/autotest_common.sh@10 -- # set +x 00:37:10.063 Malloc0 00:37:10.063 08:33:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:10.063 08:33:43 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:10.063 08:33:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:10.063 08:33:43 -- common/autotest_common.sh@10 -- # set +x 00:37:10.063 08:33:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:10.063 08:33:43 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:10.063 08:33:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:10.063 08:33:43 -- common/autotest_common.sh@10 -- # set +x 00:37:10.063 08:33:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:10.063 08:33:43 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:10.063 08:33:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:10.063 08:33:43 -- common/autotest_common.sh@10 -- # set +x 00:37:10.063 
[2024-04-17 08:33:43.342505] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:10.063 08:33:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:10.063 08:33:43 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:37:10.063 08:33:43 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:37:10.063 08:33:43 -- nvmf/common.sh@520 -- # config=() 00:37:10.063 08:33:43 -- nvmf/common.sh@520 -- # local subsystem config 00:37:10.063 08:33:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:37:10.063 08:33:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:37:10.063 { 00:37:10.063 "params": { 00:37:10.063 "name": "Nvme$subsystem", 00:37:10.063 "trtype": "$TEST_TRANSPORT", 00:37:10.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:10.063 "adrfam": "ipv4", 00:37:10.063 "trsvcid": "$NVMF_PORT", 00:37:10.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:10.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:10.063 "hdgst": ${hdgst:-false}, 00:37:10.063 "ddgst": ${ddgst:-false} 00:37:10.063 }, 00:37:10.063 "method": "bdev_nvme_attach_controller" 00:37:10.063 } 00:37:10.063 EOF 00:37:10.063 )") 00:37:10.063 08:33:43 -- nvmf/common.sh@542 -- # cat 00:37:10.063 08:33:43 -- nvmf/common.sh@544 -- # jq . 00:37:10.063 08:33:43 -- nvmf/common.sh@545 -- # IFS=, 00:37:10.063 08:33:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:37:10.063 "params": { 00:37:10.063 "name": "Nvme1", 00:37:10.063 "trtype": "tcp", 00:37:10.063 "traddr": "10.0.0.2", 00:37:10.063 "adrfam": "ipv4", 00:37:10.063 "trsvcid": "4420", 00:37:10.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:10.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:10.063 "hdgst": false, 00:37:10.063 "ddgst": false 00:37:10.063 }, 00:37:10.063 "method": "bdev_nvme_attach_controller" 00:37:10.063 }' 00:37:10.322 [2024-04-17 08:33:43.399739] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:37:10.322 [2024-04-17 08:33:43.399820] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75543 ] 00:37:10.322 [2024-04-17 08:33:43.549664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:10.581 [2024-04-17 08:33:43.654724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:10.581 [2024-04-17 08:33:43.654923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:10.581 [2024-04-17 08:33:43.654978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:10.581 [2024-04-17 08:33:43.805196] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
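
bdevio receives its bdev configuration on /dev/fd/62, which is bash process substitution at work: the harness effectively runs bdevio --json <(gen_nvmf_target_json). The rendered config is printed verbatim above; below is a self-contained sketch that reproduces it. The outer subsystems/config envelope is assumed from SPDK's standard --json layout, since the xtrace only shows the inner attach-controller entry:

gen_target_json_sketch() {
    # Envelope assumed; the bdev_nvme_attach_controller params mirror
    # the rendered config in the log above.
    echo '{
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        } ]
      } ]
    }'
}
# usage: bdevio --json <(gen_target_json_sketch)   # expands to /dev/fd/NN at runtime

With this config bdevio attaches to the subsystem over NVMe/TCP as its one block device (Nvme1n1), which is why the suite that follows runs against a 64 MiB, 512-byte-block target.
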
00:37:10.581 [2024-04-17 08:33:43.805348] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:37:10.581 I/O targets: 00:37:10.581 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:37:10.581 00:37:10.581 00:37:10.581 CUnit - A unit testing framework for C - Version 2.1-3 00:37:10.581 http://cunit.sourceforge.net/ 00:37:10.581 00:37:10.581 00:37:10.581 Suite: bdevio tests on: Nvme1n1 00:37:10.581 Test: blockdev write read block ...passed 00:37:10.581 Test: blockdev write zeroes read block ...passed 00:37:10.581 Test: blockdev write zeroes read no split ...passed 00:37:10.581 Test: blockdev write zeroes read split ...passed 00:37:10.840 Test: blockdev write zeroes read split partial ...passed 00:37:10.840 Test: blockdev reset ...[2024-04-17 08:33:43.924519] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.840 [2024-04-17 08:33:43.924764] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd3810 (9): Bad file descriptor 00:37:10.840 [2024-04-17 08:33:43.943715] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:37:10.840 passed 00:37:10.840 Test: blockdev write read 8 blocks ...passed 00:37:10.840 Test: blockdev write read size > 128k ...passed 00:37:10.840 Test: blockdev write read invalid size ...passed 00:37:10.840 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:10.840 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:10.840 Test: blockdev write read max offset ...passed 00:37:10.840 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:10.840 Test: blockdev writev readv 8 blocks ...passed 00:37:10.840 Test: blockdev writev readv 30 x 1block ...passed 00:37:10.840 Test: blockdev writev readv block ...passed 00:37:10.840 Test: blockdev writev readv size > 128k ...passed 00:37:10.840 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:10.840 Test: blockdev comparev and writev ...[2024-04-17 08:33:44.113558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:10.840 [2024-04-17 08:33:44.113618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:10.840 [2024-04-17 08:33:44.113633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:10.840 [2024-04-17 08:33:44.113640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.840 [2024-04-17 08:33:44.113932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:10.840 [2024-04-17 08:33:44.113946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:10.840 [2024-04-17 08:33:44.113959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:10.840 [2024-04-17 08:33:44.113966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:10.840 [2024-04-17 08:33:44.114211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:10.840 [2024-04-17 08:33:44.114225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:10.840 [2024-04-17 08:33:44.114239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:10.840 [2024-04-17 08:33:44.114249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:10.840 [2024-04-17 08:33:44.114556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:10.840 [2024-04-17 08:33:44.114577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:10.840 [2024-04-17 08:33:44.114590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:10.840 [2024-04-17 08:33:44.114597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:10.840 passed 00:37:11.104 Test: blockdev nvme passthru rw ...passed 00:37:11.104 Test: blockdev nvme passthru vendor specific ...[2024-04-17 08:33:44.197878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:11.104 [2024-04-17 08:33:44.197930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:11.104 [2024-04-17 08:33:44.198049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:11.104 [2024-04-17 08:33:44.198062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:11.104 [2024-04-17 08:33:44.198157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:11.104 [2024-04-17 08:33:44.198170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:11.104 [2024-04-17 08:33:44.198273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:11.104 [2024-04-17 08:33:44.198286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:11.104 passed 00:37:11.104 Test: blockdev nvme admin passthru ...passed 00:37:11.104 Test: blockdev copy ...passed 00:37:11.104 00:37:11.104 Run Summary: Type Total Ran Passed Failed Inactive 00:37:11.104 suites 1 1 n/a 0 0 00:37:11.104 tests 23 23 23 0 0 00:37:11.104 asserts 152 152 152 0 n/a 00:37:11.104 00:37:11.104 Elapsed time = 0.907 seconds 00:37:11.366 08:33:44 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:11.366 08:33:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:11.366 08:33:44 -- common/autotest_common.sh@10 -- # set +x 00:37:11.366 08:33:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:11.366 08:33:44 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:37:11.366 08:33:44 -- target/bdevio.sh@30 -- # nvmftestfini 00:37:11.366 08:33:44 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:37:11.366 08:33:44 -- nvmf/common.sh@116 -- # sync 00:37:11.366 08:33:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:37:11.366 08:33:44 -- nvmf/common.sh@119 -- # set +e 00:37:11.366 08:33:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:37:11.366 08:33:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:37:11.366 rmmod nvme_tcp 00:37:11.366 rmmod nvme_fabrics 00:37:11.366 rmmod nvme_keyring 00:37:11.366 08:33:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:37:11.366 08:33:44 -- nvmf/common.sh@123 -- # set -e 00:37:11.366 08:33:44 -- nvmf/common.sh@124 -- # return 0 00:37:11.366 08:33:44 -- nvmf/common.sh@477 -- # '[' -n 75481 ']' 00:37:11.366 08:33:44 -- nvmf/common.sh@478 -- # killprocess 75481 00:37:11.366 08:33:44 -- common/autotest_common.sh@926 -- # '[' -z 75481 ']' 00:37:11.366 08:33:44 -- common/autotest_common.sh@930 -- # kill -0 75481 00:37:11.366 08:33:44 -- common/autotest_common.sh@931 -- # uname 00:37:11.366 08:33:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:11.366 08:33:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75481 00:37:11.366 08:33:44 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:37:11.366 08:33:44 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:37:11.366 killing process with pid 75481 00:37:11.366 08:33:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75481' 00:37:11.366 08:33:44 -- common/autotest_common.sh@945 -- # kill 75481 00:37:11.366 08:33:44 -- common/autotest_common.sh@950 -- # wait 75481 00:37:11.625 08:33:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:37:11.625 08:33:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:37:11.625 08:33:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:37:11.625 08:33:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:11.625 08:33:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:37:11.625 08:33:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:11.625 08:33:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:11.625 08:33:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:11.625 08:33:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:37:11.625 00:37:11.625 real 0m3.085s 00:37:11.625 user 0m10.907s 00:37:11.625 sys 0m0.738s 00:37:11.625 08:33:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:11.625 08:33:44 -- common/autotest_common.sh@10 -- # set +x 00:37:11.625 ************************************ 00:37:11.625 END TEST nvmf_bdevio 00:37:11.625 ************************************ 00:37:11.625 08:33:44 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:37:11.625 08:33:44 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:37:11.625 08:33:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:37:11.625 08:33:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:11.625 08:33:44 -- common/autotest_common.sh@10 -- # set +x 00:37:11.625 ************************************ 00:37:11.625 START TEST nvmf_bdevio_no_huge 00:37:11.625 ************************************ 00:37:11.625 08:33:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:37:11.884 * Looking for test storage... 
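
Before the no-huge variant gets going, note how the previous test tore itself down: nvmftestfini unloads nvme-tcp/nvme-fabrics (the rmmod lines above), kills the target, removes the namespace, and flushes nvmf_init_if. The killprocess steps visible in the xtrace reduce to roughly this sketch; the real helper in autotest_common.sh also special-cases sudo-wrapped processes:

killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" || return 0          # already gone, nothing to do
    # SPDK reactors rename themselves; ps reported "reactor_3" above.
    # A "sudo" comm name would need signalling the child instead (elided).
    ps --no-headers -o comm= "$pid"
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                         # reap it so sockets and memory are released
}

killprocess_sketch 75481

Reaping with wait matters here: the next test reuses /var/tmp/spdk.sock and the 10.0.0.x listeners, so the harness must not proceed until the old target has fully exited.
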
00:37:11.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:11.884 08:33:45 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:11.884 08:33:45 -- nvmf/common.sh@7 -- # uname -s 00:37:11.884 08:33:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:11.884 08:33:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:11.884 08:33:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:11.884 08:33:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:11.884 08:33:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:11.884 08:33:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:11.884 08:33:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:11.884 08:33:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:11.884 08:33:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:11.884 08:33:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:11.884 08:33:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:37:11.884 08:33:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:37:11.884 08:33:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:11.884 08:33:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:11.884 08:33:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:11.884 08:33:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:11.884 08:33:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:11.884 08:33:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:11.884 08:33:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:11.884 08:33:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.884 08:33:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.884 08:33:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.884 08:33:45 -- 
paths/export.sh@5 -- # export PATH 00:37:11.884 08:33:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.884 08:33:45 -- nvmf/common.sh@46 -- # : 0 00:37:11.884 08:33:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:37:11.884 08:33:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:37:11.884 08:33:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:37:11.884 08:33:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:11.884 08:33:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:11.884 08:33:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:37:11.884 08:33:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:37:11.884 08:33:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:37:11.884 08:33:45 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:11.884 08:33:45 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:11.884 08:33:45 -- target/bdevio.sh@14 -- # nvmftestinit 00:37:11.884 08:33:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:37:11.884 08:33:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:11.884 08:33:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:37:11.884 08:33:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:37:11.884 08:33:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:37:11.884 08:33:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:11.884 08:33:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:11.884 08:33:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:11.884 08:33:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:37:11.884 08:33:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:37:11.884 08:33:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:37:11.884 08:33:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:37:11.884 08:33:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:37:11.884 08:33:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:37:11.884 08:33:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:11.884 08:33:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:11.884 08:33:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:37:11.884 08:33:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:37:11.884 08:33:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:11.884 08:33:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:11.884 08:33:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:11.884 08:33:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:11.884 08:33:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:11.884 08:33:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:11.884 08:33:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:11.884 08:33:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:11.884 08:33:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:37:11.884 
08:33:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:37:11.884 Cannot find device "nvmf_tgt_br" 00:37:11.885 08:33:45 -- nvmf/common.sh@154 -- # true 00:37:11.885 08:33:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:37:11.885 Cannot find device "nvmf_tgt_br2" 00:37:11.885 08:33:45 -- nvmf/common.sh@155 -- # true 00:37:11.885 08:33:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:37:11.885 08:33:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:37:11.885 Cannot find device "nvmf_tgt_br" 00:37:11.885 08:33:45 -- nvmf/common.sh@157 -- # true 00:37:11.885 08:33:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:37:11.885 Cannot find device "nvmf_tgt_br2" 00:37:11.885 08:33:45 -- nvmf/common.sh@158 -- # true 00:37:11.885 08:33:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:37:11.885 08:33:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:37:12.144 08:33:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:12.144 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:12.144 08:33:45 -- nvmf/common.sh@161 -- # true 00:37:12.144 08:33:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:12.144 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:12.144 08:33:45 -- nvmf/common.sh@162 -- # true 00:37:12.144 08:33:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:37:12.144 08:33:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:12.144 08:33:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:12.144 08:33:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:12.144 08:33:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:12.144 08:33:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:12.144 08:33:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:12.144 08:33:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:37:12.144 08:33:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:37:12.144 08:33:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:37:12.144 08:33:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:37:12.144 08:33:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:37:12.144 08:33:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:37:12.144 08:33:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:12.144 08:33:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:12.144 08:33:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:12.144 08:33:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:37:12.144 08:33:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:37:12.144 08:33:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:37:12.144 08:33:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:12.144 08:33:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:12.144 08:33:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:12.144 08:33:45 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:12.144 08:33:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:37:12.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:12.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:37:12.144 00:37:12.144 --- 10.0.0.2 ping statistics --- 00:37:12.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:12.144 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:37:12.144 08:33:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:37:12.144 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:12.144 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:37:12.144 00:37:12.144 --- 10.0.0.3 ping statistics --- 00:37:12.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:12.144 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:37:12.144 08:33:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:12.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:12.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:37:12.144 00:37:12.144 --- 10.0.0.1 ping statistics --- 00:37:12.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:12.144 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:37:12.144 08:33:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:12.144 08:33:45 -- nvmf/common.sh@421 -- # return 0 00:37:12.144 08:33:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:37:12.144 08:33:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:12.144 08:33:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:37:12.144 08:33:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:37:12.144 08:33:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:12.144 08:33:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:37:12.144 08:33:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:37:12.404 08:33:45 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:37:12.404 08:33:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:37:12.404 08:33:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:37:12.404 08:33:45 -- common/autotest_common.sh@10 -- # set +x 00:37:12.404 08:33:45 -- nvmf/common.sh@469 -- # nvmfpid=75721 00:37:12.404 08:33:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:37:12.404 08:33:45 -- nvmf/common.sh@470 -- # waitforlisten 75721 00:37:12.404 08:33:45 -- common/autotest_common.sh@819 -- # '[' -z 75721 ']' 00:37:12.404 08:33:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:12.405 08:33:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:37:12.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:12.405 08:33:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:12.405 08:33:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:37:12.405 08:33:45 -- common/autotest_common.sh@10 -- # set +x 00:37:12.405 [2024-04-17 08:33:45.541846] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
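
This variant exists to prove the target works without hugepages. Comparing the two EAL parameter lines in this log: the earlier run used --iova-mode=pa with hugepage-backed memory, while this one passes --no-huge -s 1024, giving nvmf_tgt a 1024 MB anonymous-memory pool and forcing --iova-mode=va, since unpinned 4 KiB pages have no stable physical addresses to hand to the DMA layer. Side by side ($NVMF_TGT is a stand-in for the binary path used above):

# hugepage-backed (earlier test) -> EAL chose --iova-mode=pa
ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x78

# hugepage-free (this test) -> EAL runs with -m 1024 --no-huge --iova-mode=va
ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

The bdevio initiator below is started with the same --no-huge -s 1024 pair, so both ends of the connection run hugepage-free.
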
00:37:12.405 [2024-04-17 08:33:45.541956] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:37:12.405 [2024-04-17 08:33:45.677238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:12.664 [2024-04-17 08:33:45.788051] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:37:12.664 [2024-04-17 08:33:45.788208] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:12.664 [2024-04-17 08:33:45.788221] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:12.664 [2024-04-17 08:33:45.788230] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:12.664 [2024-04-17 08:33:45.788521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:37:12.664 [2024-04-17 08:33:45.788754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:37:12.664 [2024-04-17 08:33:45.788996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:37:12.664 [2024-04-17 08:33:45.789009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:13.232 08:33:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:37:13.232 08:33:46 -- common/autotest_common.sh@852 -- # return 0 00:37:13.232 08:33:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:37:13.232 08:33:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:37:13.232 08:33:46 -- common/autotest_common.sh@10 -- # set +x 00:37:13.232 08:33:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:13.232 08:33:46 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:13.232 08:33:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:13.232 08:33:46 -- common/autotest_common.sh@10 -- # set +x 00:37:13.232 [2024-04-17 08:33:46.449973] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:13.232 08:33:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:13.232 08:33:46 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:13.232 08:33:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:13.232 08:33:46 -- common/autotest_common.sh@10 -- # set +x 00:37:13.232 Malloc0 00:37:13.232 08:33:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:13.232 08:33:46 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:13.232 08:33:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:13.232 08:33:46 -- common/autotest_common.sh@10 -- # set +x 00:37:13.232 08:33:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:13.232 08:33:46 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:13.232 08:33:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:13.232 08:33:46 -- common/autotest_common.sh@10 -- # set +x 00:37:13.232 08:33:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:13.232 08:33:46 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:13.232 08:33:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:13.232 08:33:46 -- common/autotest_common.sh@10 -- # set +x 00:37:13.232 
[2024-04-17 08:33:46.487094] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:13.232 08:33:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:13.232 08:33:46 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:37:13.232 08:33:46 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:37:13.232 08:33:46 -- nvmf/common.sh@520 -- # config=() 00:37:13.232 08:33:46 -- nvmf/common.sh@520 -- # local subsystem config 00:37:13.232 08:33:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:37:13.232 08:33:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:37:13.232 { 00:37:13.232 "params": { 00:37:13.232 "name": "Nvme$subsystem", 00:37:13.232 "trtype": "$TEST_TRANSPORT", 00:37:13.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:13.232 "adrfam": "ipv4", 00:37:13.232 "trsvcid": "$NVMF_PORT", 00:37:13.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:13.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:13.232 "hdgst": ${hdgst:-false}, 00:37:13.232 "ddgst": ${ddgst:-false} 00:37:13.232 }, 00:37:13.232 "method": "bdev_nvme_attach_controller" 00:37:13.232 } 00:37:13.232 EOF 00:37:13.232 )") 00:37:13.232 08:33:46 -- nvmf/common.sh@542 -- # cat 00:37:13.232 08:33:46 -- nvmf/common.sh@544 -- # jq . 00:37:13.232 08:33:46 -- nvmf/common.sh@545 -- # IFS=, 00:37:13.232 08:33:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:37:13.233 "params": { 00:37:13.233 "name": "Nvme1", 00:37:13.233 "trtype": "tcp", 00:37:13.233 "traddr": "10.0.0.2", 00:37:13.233 "adrfam": "ipv4", 00:37:13.233 "trsvcid": "4420", 00:37:13.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:13.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:13.233 "hdgst": false, 00:37:13.233 "ddgst": false 00:37:13.233 }, 00:37:13.233 "method": "bdev_nvme_attach_controller" 00:37:13.233 }' 00:37:13.233 [2024-04-17 08:33:46.546930] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:37:13.233 [2024-04-17 08:33:46.547367] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid75775 ] 00:37:13.491 [2024-04-17 08:33:46.681178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:13.491 [2024-04-17 08:33:46.801818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:13.491 [2024-04-17 08:33:46.801878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:13.491 [2024-04-17 08:33:46.801881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:13.750 [2024-04-17 08:33:46.981390] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
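
The rpc_cmd calls that configured this target are one-liners against the RPC socket; written out with scripts/rpc.py they are the canonical five-step NVMe-oF/TCP bring-up, flags copied verbatim from the xtrace above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8 KiB in-capsule data
$rpc bdev_malloc_create 64 512 -b Malloc0              # 64 MiB RAM disk, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001                           # -a allows any host NQN to connect
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                         # the "Target Listening" notice above

The "NVMe/TCP Target Listening on 10.0.0.2 port 4420" line is the target-side confirmation that the last call took effect; everything after it is the initiator's doing.
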
00:37:13.750 [2024-04-17 08:33:46.981448] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:37:13.750 I/O targets: 00:37:13.750 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:37:13.750 00:37:13.750 00:37:13.750 CUnit - A unit testing framework for C - Version 2.1-3 00:37:13.750 http://cunit.sourceforge.net/ 00:37:13.750 00:37:13.750 00:37:13.750 Suite: bdevio tests on: Nvme1n1 00:37:13.750 Test: blockdev write read block ...passed 00:37:13.750 Test: blockdev write zeroes read block ...passed 00:37:13.750 Test: blockdev write zeroes read no split ...passed 00:37:14.009 Test: blockdev write zeroes read split ...passed 00:37:14.009 Test: blockdev write zeroes read split partial ...passed 00:37:14.009 Test: blockdev reset ...[2024-04-17 08:33:47.119672] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.009 [2024-04-17 08:33:47.119798] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1263ba0 (9): Bad file descriptor 00:37:14.009 passed 00:37:14.009 Test: blockdev write read 8 blocks ...passed 00:37:14.009 Test: blockdev write read size > 128k ...[2024-04-17 08:33:47.139451] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:37:14.009 passed 00:37:14.009 Test: blockdev write read invalid size ...passed 00:37:14.009 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:14.009 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:14.009 Test: blockdev write read max offset ...passed 00:37:14.009 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:14.009 Test: blockdev writev readv 8 blocks ...passed 00:37:14.009 Test: blockdev writev readv 30 x 1block ...passed 00:37:14.009 Test: blockdev writev readv block ...passed 00:37:14.009 Test: blockdev writev readv size > 128k ...passed 00:37:14.009 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:14.009 Test: blockdev comparev and writev ...[2024-04-17 08:33:47.311372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:14.009 [2024-04-17 08:33:47.311435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:14.009 [2024-04-17 08:33:47.311451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:14.009 [2024-04-17 08:33:47.311459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:14.009 [2024-04-17 08:33:47.311725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:14.009 [2024-04-17 08:33:47.311735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:14.010 [2024-04-17 08:33:47.311747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:14.010 [2024-04-17 08:33:47.311754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:14.010 [2024-04-17 08:33:47.311998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:14.010 [2024-04-17 08:33:47.312008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:14.010 [2024-04-17 08:33:47.312020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:14.010 [2024-04-17 08:33:47.312027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:14.010 [2024-04-17 08:33:47.312262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:14.010 [2024-04-17 08:33:47.312276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:14.010 [2024-04-17 08:33:47.312291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:14.010 [2024-04-17 08:33:47.312299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:14.268 passed 00:37:14.268 Test: blockdev nvme passthru rw ...passed 00:37:14.268 Test: blockdev nvme passthru vendor specific ...[2024-04-17 08:33:47.394779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:14.268 [2024-04-17 08:33:47.394815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:14.268 [2024-04-17 08:33:47.394942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:14.268 [2024-04-17 08:33:47.394956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:14.268 [2024-04-17 08:33:47.395045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:14.268 [2024-04-17 08:33:47.395058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:14.268 [2024-04-17 08:33:47.395149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:14.268 [2024-04-17 08:33:47.395161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:14.268 passed 00:37:14.268 Test: blockdev nvme admin passthru ...passed 00:37:14.268 Test: blockdev copy ...passed 00:37:14.268 00:37:14.268 Run Summary: Type Total Ran Passed Failed Inactive 00:37:14.268 suites 1 1 n/a 0 0 00:37:14.268 tests 23 23 23 0 0 00:37:14.268 asserts 152 152 152 0 n/a 00:37:14.268 00:37:14.268 Elapsed time = 0.955 seconds 00:37:14.526 08:33:47 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:14.526 08:33:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:14.526 08:33:47 -- common/autotest_common.sh@10 -- # set +x 00:37:14.784 08:33:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:14.784 08:33:47 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:37:14.784 08:33:47 -- target/bdevio.sh@30 -- # nvmftestfini 00:37:14.784 08:33:47 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:37:14.784 08:33:47 -- nvmf/common.sh@116 -- # sync 00:37:14.784 08:33:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:37:14.784 08:33:47 -- nvmf/common.sh@119 -- # set +e 00:37:14.784 08:33:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:37:14.784 08:33:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:37:14.784 rmmod nvme_tcp 00:37:14.784 rmmod nvme_fabrics 00:37:14.784 rmmod nvme_keyring 00:37:14.784 08:33:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:37:14.784 08:33:47 -- nvmf/common.sh@123 -- # set -e 00:37:14.784 08:33:47 -- nvmf/common.sh@124 -- # return 0 00:37:14.784 08:33:47 -- nvmf/common.sh@477 -- # '[' -n 75721 ']' 00:37:14.784 08:33:47 -- nvmf/common.sh@478 -- # killprocess 75721 00:37:14.784 08:33:47 -- common/autotest_common.sh@926 -- # '[' -z 75721 ']' 00:37:14.784 08:33:47 -- common/autotest_common.sh@930 -- # kill -0 75721 00:37:14.784 08:33:47 -- common/autotest_common.sh@931 -- # uname 00:37:14.784 08:33:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:14.784 08:33:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75721 00:37:14.784 08:33:47 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:37:14.784 08:33:47 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:37:14.784 killing process with pid 75721 00:37:14.784 08:33:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75721' 00:37:14.784 08:33:47 -- common/autotest_common.sh@945 -- # kill 75721 00:37:14.784 08:33:47 -- common/autotest_common.sh@950 -- # wait 75721 00:37:15.042 08:33:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:37:15.042 08:33:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:37:15.042 08:33:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:37:15.042 08:33:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:15.042 08:33:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:37:15.042 08:33:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:15.042 08:33:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:15.042 08:33:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:15.301 08:33:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:37:15.301 00:37:15.301 real 0m3.460s 00:37:15.301 user 0m12.159s 00:37:15.301 sys 0m1.358s 00:37:15.301 08:33:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:15.301 08:33:48 -- common/autotest_common.sh@10 -- # set +x 00:37:15.301 ************************************ 00:37:15.301 END TEST nvmf_bdevio_no_huge 00:37:15.301 ************************************ 00:37:15.301 08:33:48 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:37:15.301 08:33:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:37:15.301 08:33:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:15.301 08:33:48 -- common/autotest_common.sh@10 -- # set +x 00:37:15.301 ************************************ 00:37:15.301 START TEST nvmf_tls 00:37:15.301 ************************************ 00:37:15.301 08:33:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:37:15.301 * Looking for test storage... 
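
The banner pair that just closed nvmf_bdevio_no_huge and now opens nvmf_tls comes from the run_test wrapper, which times a test script and emits the real/user/sys lines seen throughout this log. Its shape, inferred from the output alone (illustrative only, not the actual helper in autotest_common.sh, which also records timing data for the CI):

run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"        # emits the real/user/sys triple on completion
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

run_test_sketch nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp
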
00:37:15.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:15.301 08:33:48 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:15.301 08:33:48 -- nvmf/common.sh@7 -- # uname -s 00:37:15.301 08:33:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:15.301 08:33:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:15.301 08:33:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:15.301 08:33:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:15.301 08:33:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:15.301 08:33:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:15.301 08:33:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:15.301 08:33:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:15.301 08:33:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:15.301 08:33:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:15.301 08:33:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:37:15.301 08:33:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:37:15.301 08:33:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:15.301 08:33:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:15.301 08:33:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:15.301 08:33:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:15.301 08:33:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:15.301 08:33:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:15.301 08:33:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:15.301 08:33:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.301 08:33:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.301 08:33:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.301 08:33:48 -- paths/export.sh@5 
-- # export PATH 00:37:15.301 08:33:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.301 08:33:48 -- nvmf/common.sh@46 -- # : 0 00:37:15.301 08:33:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:37:15.301 08:33:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:37:15.301 08:33:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:37:15.301 08:33:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:15.301 08:33:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:15.301 08:33:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:37:15.301 08:33:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:37:15.301 08:33:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:37:15.301 08:33:48 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:15.301 08:33:48 -- target/tls.sh@71 -- # nvmftestinit 00:37:15.301 08:33:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:37:15.301 08:33:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:15.301 08:33:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:37:15.301 08:33:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:37:15.301 08:33:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:37:15.301 08:33:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:15.301 08:33:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:15.301 08:33:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:15.301 08:33:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:37:15.301 08:33:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:37:15.301 08:33:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:37:15.301 08:33:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:37:15.301 08:33:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:37:15.301 08:33:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:37:15.301 08:33:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:15.301 08:33:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:15.301 08:33:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:37:15.301 08:33:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:37:15.301 08:33:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:15.301 08:33:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:15.301 08:33:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:15.301 08:33:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:15.301 08:33:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:15.301 08:33:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:15.301 08:33:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:15.301 08:33:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:15.301 08:33:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:37:15.301 08:33:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br 
nomaster 00:37:15.301 Cannot find device "nvmf_tgt_br" 00:37:15.301 08:33:48 -- nvmf/common.sh@154 -- # true 00:37:15.301 08:33:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:37:15.561 Cannot find device "nvmf_tgt_br2" 00:37:15.561 08:33:48 -- nvmf/common.sh@155 -- # true 00:37:15.561 08:33:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:37:15.561 08:33:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:37:15.561 Cannot find device "nvmf_tgt_br" 00:37:15.561 08:33:48 -- nvmf/common.sh@157 -- # true 00:37:15.561 08:33:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:37:15.561 Cannot find device "nvmf_tgt_br2" 00:37:15.561 08:33:48 -- nvmf/common.sh@158 -- # true 00:37:15.561 08:33:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:37:15.561 08:33:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:37:15.561 08:33:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:15.561 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:15.561 08:33:48 -- nvmf/common.sh@161 -- # true 00:37:15.561 08:33:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:15.561 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:15.561 08:33:48 -- nvmf/common.sh@162 -- # true 00:37:15.561 08:33:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:37:15.561 08:33:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:15.561 08:33:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:15.561 08:33:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:15.561 08:33:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:15.561 08:33:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:15.561 08:33:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:15.561 08:33:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:37:15.561 08:33:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:37:15.561 08:33:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:37:15.561 08:33:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:37:15.561 08:33:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:37:15.561 08:33:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:37:15.561 08:33:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:15.561 08:33:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:15.561 08:33:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:15.561 08:33:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:37:15.561 08:33:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:37:15.561 08:33:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:37:15.820 08:33:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:15.820 08:33:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:15.820 08:33:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:15.820 08:33:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:37:15.820 08:33:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:37:15.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:15.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:37:15.820 00:37:15.820 --- 10.0.0.2 ping statistics --- 00:37:15.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:15.820 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:37:15.820 08:33:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:37:15.820 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:15.820 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:37:15.820 00:37:15.820 --- 10.0.0.3 ping statistics --- 00:37:15.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:15.820 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:37:15.820 08:33:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:15.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:15.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:37:15.820 00:37:15.820 --- 10.0.0.1 ping statistics --- 00:37:15.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:15.820 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:37:15.820 08:33:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:15.820 08:33:48 -- nvmf/common.sh@421 -- # return 0 00:37:15.820 08:33:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:37:15.820 08:33:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:15.820 08:33:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:37:15.820 08:33:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:37:15.820 08:33:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:15.820 08:33:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:37:15.820 08:33:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:37:15.820 08:33:48 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:37:15.820 08:33:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:37:15.820 08:33:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:37:15.820 08:33:48 -- common/autotest_common.sh@10 -- # set +x 00:37:15.820 08:33:48 -- nvmf/common.sh@469 -- # nvmfpid=75965 00:37:15.820 08:33:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:37:15.820 08:33:48 -- nvmf/common.sh@470 -- # waitforlisten 75965 00:37:15.820 08:33:48 -- common/autotest_common.sh@819 -- # '[' -z 75965 ']' 00:37:15.820 08:33:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:15.820 08:33:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:37:15.820 08:33:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:15.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:15.820 08:33:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:37:15.820 08:33:49 -- common/autotest_common.sh@10 -- # set +x 00:37:15.820 [2024-04-17 08:33:49.052978] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
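The nvmf_veth_init run traced above (nvmf/common.sh@140-206) gives the test a self-contained data path: the initiator stays in the root network namespace on 10.0.0.1, both target addresses (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, the root-side veth peers are enslaved to a single bridge, and TCP port 4420 is opened through iptables. The "Cannot find device" and "Cannot open network namespace" messages are only the teardown of a previous run's leftovers failing harmlessly. Condensed into a standalone sketch, with the commands and names exactly as the trace shows them:

# Namespace for the target; the initiator stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk

# One veth pair per endpoint; the *_br peers remain in the root namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target-side interfaces into the namespace and address everything.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring every link up, including loopback inside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the root-side peers so initiator and targets share one L2 segment.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Admit NVMe/TCP traffic and let the bridge forward it.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# The three pings in the trace check both directions before any NVMe traffic.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1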
00:37:15.820 [2024-04-17 08:33:49.053055] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:16.080 [2024-04-17 08:33:49.194977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.080 [2024-04-17 08:33:49.297736] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:37:16.080 [2024-04-17 08:33:49.297873] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:16.080 [2024-04-17 08:33:49.297880] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:16.080 [2024-04-17 08:33:49.297886] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:16.080 [2024-04-17 08:33:49.297906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:16.649 08:33:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:37:16.649 08:33:49 -- common/autotest_common.sh@852 -- # return 0 00:37:16.649 08:33:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:37:16.649 08:33:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:37:16.649 08:33:49 -- common/autotest_common.sh@10 -- # set +x 00:37:16.649 08:33:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:16.649 08:33:49 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:37:16.649 08:33:49 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:37:16.909 true 00:37:16.909 08:33:50 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:37:16.909 08:33:50 -- target/tls.sh@82 -- # jq -r .tls_version 00:37:17.168 08:33:50 -- target/tls.sh@82 -- # version=0 00:37:17.168 08:33:50 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:37:17.168 08:33:50 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:37:17.428 08:33:50 -- target/tls.sh@90 -- # jq -r .tls_version 00:37:17.428 08:33:50 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:37:17.687 08:33:50 -- target/tls.sh@90 -- # version=13 00:37:17.687 08:33:50 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:37:17.687 08:33:50 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:37:17.687 08:33:50 -- target/tls.sh@98 -- # jq -r .tls_version 00:37:17.687 08:33:50 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:37:17.947 08:33:51 -- target/tls.sh@98 -- # version=7 00:37:17.947 08:33:51 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:37:17.947 08:33:51 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:37:17.947 08:33:51 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:37:18.206 08:33:51 -- target/tls.sh@105 -- # ktls=false 00:37:18.206 08:33:51 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:37:18.206 08:33:51 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:37:18.465 08:33:51 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:37:18.465 08:33:51 -- target/tls.sh@113 -- # jq -r 
.enable_ktls 00:37:18.465 08:33:51 -- target/tls.sh@113 -- # ktls=true 00:37:18.465 08:33:51 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:37:18.465 08:33:51 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:37:18.724 08:33:51 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:37:18.724 08:33:51 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:37:18.984 08:33:52 -- target/tls.sh@121 -- # ktls=false 00:37:18.984 08:33:52 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:37:18.984 08:33:52 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:37:18.984 08:33:52 -- target/tls.sh@49 -- # local key hash crc 00:37:18.984 08:33:52 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:37:18.984 08:33:52 -- target/tls.sh@51 -- # hash=01 00:37:18.984 08:33:52 -- target/tls.sh@52 -- # gzip -1 -c 00:37:18.984 08:33:52 -- target/tls.sh@52 -- # tail -c8 00:37:18.984 08:33:52 -- target/tls.sh@52 -- # head -c 4 00:37:18.984 08:33:52 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:37:18.984 08:33:52 -- target/tls.sh@52 -- # crc='p$H�' 00:37:18.984 08:33:52 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:37:18.984 08:33:52 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:37:18.984 08:33:52 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:37:18.984 08:33:52 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:37:18.984 08:33:52 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:37:18.984 08:33:52 -- target/tls.sh@49 -- # local key hash crc 00:37:18.984 08:33:52 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:37:18.984 08:33:52 -- target/tls.sh@51 -- # hash=01 00:37:18.984 08:33:52 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:37:18.984 08:33:52 -- target/tls.sh@52 -- # tail -c8 00:37:18.984 08:33:52 -- target/tls.sh@52 -- # gzip -1 -c 00:37:18.984 08:33:52 -- target/tls.sh@52 -- # head -c 4 00:37:18.984 08:33:52 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:37:18.984 08:33:52 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:37:18.984 08:33:52 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:37:18.984 08:33:52 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:37:18.984 08:33:52 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:37:18.984 08:33:52 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:37:18.984 08:33:52 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:37:18.984 08:33:52 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:37:18.984 08:33:52 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:37:18.984 08:33:52 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:37:18.984 08:33:52 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:37:18.984 08:33:52 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:37:19.243 08:33:52 -- target/tls.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:37:19.502 08:33:52 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:37:19.502 08:33:52 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:37:19.502 08:33:52 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:37:19.761 [2024-04-17 08:33:52.902956] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:19.761 08:33:52 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:37:20.021 08:33:53 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:37:20.021 [2024-04-17 08:33:53.318223] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:20.021 [2024-04-17 08:33:53.318428] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:20.021 08:33:53 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:37:20.280 malloc0 00:37:20.280 08:33:53 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:37:20.539 08:33:53 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:37:20.799 08:33:53 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:37:33.009 Initializing NVMe Controllers 00:37:33.009 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:33.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:33.009 Initialization complete. Launching workers. 
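setup_nvmf_tgt (tls.sh@58-67, traced above) is the entire target-side TLS configuration: create the TCP transport, add a subsystem backed by a malloc bdev, open a TLS-enabled listener, and register a PSK for exactly one host NQN. Collected into one sequence, with the rpc.py subcommands and arguments as the trace shows them:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt

$rpc_py nvmf_create_transport -t tcp -o
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k makes this a TLS listener (flagged experimental at this SPDK revision).
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc_py bdev_malloc_create 32 4096 -b malloc0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# The PSK is bound to host1 only; every negative case later violates this pairing.
$rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

The spdk_nvme_perf numbers that follow (14233 IOPS of 4 KiB random I/O at queue depth 64, over TLS) are the first end-to-end confirmation that handshake and data path both work.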
00:37:33.009 ======================================================== 00:37:33.009 Latency(us) 00:37:33.009 Device Information : IOPS MiB/s Average min max 00:37:33.009 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14233.05 55.60 4497.18 1000.91 5344.39 00:37:33.009 ======================================================== 00:37:33.009 Total : 14233.05 55.60 4497.18 1000.91 5344.39 00:37:33.009 00:37:33.009 08:34:04 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:37:33.009 08:34:04 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:37:33.009 08:34:04 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:37:33.009 08:34:04 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:37:33.009 08:34:04 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:37:33.009 08:34:04 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:33.009 08:34:04 -- target/tls.sh@28 -- # bdevperf_pid=76328 00:37:33.009 08:34:04 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:37:33.009 08:34:04 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:33.009 08:34:04 -- target/tls.sh@31 -- # waitforlisten 76328 /var/tmp/bdevperf.sock 00:37:33.009 08:34:04 -- common/autotest_common.sh@819 -- # '[' -z 76328 ']' 00:37:33.009 08:34:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:33.009 08:34:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:37:33.009 08:34:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:33.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:33.009 08:34:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:37:33.009 08:34:04 -- common/autotest_common.sh@10 -- # set +x 00:37:33.009 [2024-04-17 08:34:04.183429] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:37:33.009 [2024-04-17 08:34:04.183490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76328 ] 00:37:33.009 [2024-04-17 08:34:04.321234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:33.009 [2024-04-17 08:34:04.412142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:33.009 08:34:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:37:33.009 08:34:05 -- common/autotest_common.sh@852 -- # return 0 00:37:33.009 08:34:05 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:37:33.009 [2024-04-17 08:34:05.218123] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:33.009 TLSTESTn1 00:37:33.009 08:34:05 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:37:33.009 Running I/O for 10 seconds... 
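run_bdevperf (tls.sh@22-45, first traced here) is the initiator-side harness every remaining case reuses: start bdevperf idle with -z on a private RPC socket, attach a TLS controller, then drive a timed verify workload. Linearized below, with the commands as traced; the real helper also sets cleanup traps and polls the socket via waitforlisten before attaching:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# -z parks bdevperf until perform_tests arrives over its RPC socket.
$bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
bdevperf_pid=$!

# The only step where the PSK appears on the initiator side.
$rpc_py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests

The TLSTESTn1 results just below are this harness's first pass, with the matching key1.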
00:37:42.989 00:37:42.989 Latency(us) 00:37:42.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:42.989 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:37:42.989 Verification LBA range: start 0x0 length 0x2000 00:37:42.989 TLSTESTn1 : 10.01 8123.47 31.73 0.00 0.00 15733.49 3076.47 20948.63 00:37:42.989 =================================================================================================================== 00:37:42.989 Total : 8123.47 31.73 0.00 0.00 15733.49 3076.47 20948.63 00:37:42.989 0 00:37:42.989 08:34:15 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:42.989 08:34:15 -- target/tls.sh@45 -- # killprocess 76328 00:37:42.989 08:34:15 -- common/autotest_common.sh@926 -- # '[' -z 76328 ']' 00:37:42.989 08:34:15 -- common/autotest_common.sh@930 -- # kill -0 76328 00:37:42.989 08:34:15 -- common/autotest_common.sh@931 -- # uname 00:37:42.989 08:34:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:42.989 08:34:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76328 00:37:42.989 08:34:15 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:37:42.989 08:34:15 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:37:42.989 killing process with pid 76328 00:37:42.989 08:34:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76328' 00:37:42.989 08:34:15 -- common/autotest_common.sh@945 -- # kill 76328 00:37:42.989 Received shutdown signal, test time was about 10.000000 seconds 00:37:42.989 00:37:42.989 Latency(us) 00:37:42.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:42.990 =================================================================================================================== 00:37:42.990 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:42.990 08:34:15 -- common/autotest_common.sh@950 -- # wait 76328 00:37:42.990 08:34:15 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:37:42.990 08:34:15 -- common/autotest_common.sh@640 -- # local es=0 00:37:42.990 08:34:15 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:37:42.990 08:34:15 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:37:42.990 08:34:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:37:42.990 08:34:15 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:37:42.990 08:34:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:37:42.990 08:34:15 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:37:42.990 08:34:15 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:37:42.990 08:34:15 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:37:42.990 08:34:15 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:37:42.990 08:34:15 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:37:42.990 08:34:15 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:42.990 08:34:15 -- target/tls.sh@28 -- # bdevperf_pid=76475 00:37:42.990 08:34:15 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:42.990 08:34:15 -- target/tls.sh@31 -- # 
waitforlisten 76475 /var/tmp/bdevperf.sock 00:37:42.990 08:34:15 -- common/autotest_common.sh@819 -- # '[' -z 76475 ']' 00:37:42.990 08:34:15 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:37:42.990 08:34:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:42.990 08:34:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:37:42.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:42.990 08:34:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:42.990 08:34:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:37:42.990 08:34:15 -- common/autotest_common.sh@10 -- # set +x 00:37:42.990 [2024-04-17 08:34:15.771242] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:37:42.990 [2024-04-17 08:34:15.771334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76475 ] 00:37:42.990 [2024-04-17 08:34:15.896894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:42.990 [2024-04-17 08:34:16.000868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:43.558 08:34:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:37:43.559 08:34:16 -- common/autotest_common.sh@852 -- # return 0 00:37:43.559 08:34:16 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:37:43.559 [2024-04-17 08:34:16.854894] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:43.559 [2024-04-17 08:34:16.865470] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:43.559 [2024-04-17 08:34:16.866236] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ec570 (107): Transport endpoint is not connected 00:37:43.559 [2024-04-17 08:34:16.867222] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ec570 (9): Bad file descriptor 00:37:43.559 [2024-04-17 08:34:16.868218] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:43.559 [2024-04-17 08:34:16.868244] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:37:43.559 [2024-04-17 08:34:16.868253] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
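This first negative case (tls.sh@155) attached host1 with key2.txt while the target only holds key1.txt for that host, so the TLS handshake never completes: spdk_sock_recv fails with errno 107, the qpair flush reports a bad file descriptor, and controller init gives up, exactly the sequence above. The JSON-RPC dump that follows is the client-side symptom (Code=-32602), and the NOT wrapper turns that failure into a test pass. A minimal sketch of NOT in the spirit of the autotest_common.sh trace (the real helper, visible at @640-667, also treats exit codes above 128 specially):

NOT() {
    local es=0
    "$@" || es=$?
    # Succeed only if the wrapped command failed: a passing negative test.
    (( es != 0 ))
}

# As at tls.sh@155: attaching with the wrong key must fail.
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt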
00:37:43.559 2024/04/17 08:34:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:37:43.559 request: 00:37:43.559 { 00:37:43.559 "method": "bdev_nvme_attach_controller", 00:37:43.559 "params": { 00:37:43.559 "name": "TLSTEST", 00:37:43.559 "trtype": "tcp", 00:37:43.559 "traddr": "10.0.0.2", 00:37:43.559 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:43.559 "adrfam": "ipv4", 00:37:43.559 "trsvcid": "4420", 00:37:43.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:43.559 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:37:43.559 } 00:37:43.559 } 00:37:43.559 Got JSON-RPC error response 00:37:43.559 GoRPCClient: error on JSON-RPC call 00:37:43.818 08:34:16 -- target/tls.sh@36 -- # killprocess 76475 00:37:43.818 08:34:16 -- common/autotest_common.sh@926 -- # '[' -z 76475 ']' 00:37:43.818 08:34:16 -- common/autotest_common.sh@930 -- # kill -0 76475 00:37:43.818 08:34:16 -- common/autotest_common.sh@931 -- # uname 00:37:43.818 08:34:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:43.818 08:34:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76475 00:37:43.818 08:34:16 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:37:43.818 08:34:16 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:37:43.818 08:34:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76475' 00:37:43.818 killing process with pid 76475 00:37:43.818 08:34:16 -- common/autotest_common.sh@945 -- # kill 76475 00:37:43.818 Received shutdown signal, test time was about 10.000000 seconds 00:37:43.818 00:37:43.818 Latency(us) 00:37:43.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:43.818 =================================================================================================================== 00:37:43.818 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:43.818 08:34:16 -- common/autotest_common.sh@950 -- # wait 76475 00:37:43.818 08:34:17 -- target/tls.sh@37 -- # return 1 00:37:43.818 08:34:17 -- common/autotest_common.sh@643 -- # es=1 00:37:43.818 08:34:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:37:43.818 08:34:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:37:43.818 08:34:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:37:43.818 08:34:17 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:37:43.818 08:34:17 -- common/autotest_common.sh@640 -- # local es=0 00:37:43.818 08:34:17 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:37:43.818 08:34:17 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:37:43.818 08:34:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:37:43.818 08:34:17 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:37:43.818 08:34:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:37:43.818 08:34:17 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:37:43.818 08:34:17 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:37:43.818 08:34:17 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:37:43.818 08:34:17 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:37:43.818 08:34:17 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:37:43.818 08:34:17 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:43.818 08:34:17 -- target/tls.sh@28 -- # bdevperf_pid=76515 00:37:43.818 08:34:17 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:37:44.078 08:34:17 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:44.078 08:34:17 -- target/tls.sh@31 -- # waitforlisten 76515 /var/tmp/bdevperf.sock 00:37:44.078 08:34:17 -- common/autotest_common.sh@819 -- # '[' -z 76515 ']' 00:37:44.078 08:34:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:44.078 08:34:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:37:44.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:44.078 08:34:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:44.078 08:34:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:37:44.078 08:34:17 -- common/autotest_common.sh@10 -- # set +x 00:37:44.078 [2024-04-17 08:34:17.199610] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:37:44.078 [2024-04-17 08:34:17.199685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76515 ] 00:37:44.078 [2024-04-17 08:34:17.338604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:44.336 [2024-04-17 08:34:17.442975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:44.904 08:34:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:37:44.904 08:34:18 -- common/autotest_common.sh@852 -- # return 0 00:37:44.904 08:34:18 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:37:45.163 [2024-04-17 08:34:18.302356] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:45.163 [2024-04-17 08:34:18.312817] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:37:45.163 [2024-04-17 08:34:18.312867] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:37:45.163 [2024-04-17 08:34:18.312945] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:45.163 [2024-04-17 08:34:18.313750] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61e570 (107): Transport endpoint is not connected 00:37:45.163 
[2024-04-17 08:34:18.314734] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61e570 (9): Bad file descriptor 00:37:45.163 [2024-04-17 08:34:18.315731] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:45.163 [2024-04-17 08:34:18.315751] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:37:45.163 [2024-04-17 08:34:18.315761] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:45.163 2024/04/17 08:34:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:37:45.163 request: 00:37:45.163 { 00:37:45.163 "method": "bdev_nvme_attach_controller", 00:37:45.163 "params": { 00:37:45.163 "name": "TLSTEST", 00:37:45.163 "trtype": "tcp", 00:37:45.163 "traddr": "10.0.0.2", 00:37:45.163 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:45.163 "adrfam": "ipv4", 00:37:45.163 "trsvcid": "4420", 00:37:45.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:45.163 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:37:45.163 } 00:37:45.163 } 00:37:45.163 Got JSON-RPC error response 00:37:45.163 GoRPCClient: error on JSON-RPC call 00:37:45.163 08:34:18 -- target/tls.sh@36 -- # killprocess 76515 00:37:45.163 08:34:18 -- common/autotest_common.sh@926 -- # '[' -z 76515 ']' 00:37:45.163 08:34:18 -- common/autotest_common.sh@930 -- # kill -0 76515 00:37:45.163 08:34:18 -- common/autotest_common.sh@931 -- # uname 00:37:45.163 08:34:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:45.163 08:34:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76515 00:37:45.163 08:34:18 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:37:45.163 08:34:18 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:37:45.163 08:34:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76515' 00:37:45.163 killing process with pid 76515 00:37:45.163 08:34:18 -- common/autotest_common.sh@945 -- # kill 76515 00:37:45.163 Received shutdown signal, test time was about 10.000000 seconds 00:37:45.163 00:37:45.163 Latency(us) 00:37:45.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:45.163 =================================================================================================================== 00:37:45.163 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:45.163 08:34:18 -- common/autotest_common.sh@950 -- # wait 76515 00:37:45.423 08:34:18 -- target/tls.sh@37 -- # return 1 00:37:45.423 08:34:18 -- common/autotest_common.sh@643 -- # es=1 00:37:45.423 08:34:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:37:45.423 08:34:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:37:45.423 08:34:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:37:45.423 08:34:18 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:37:45.423 08:34:18 -- common/autotest_common.sh@640 -- # local es=0 00:37:45.424 08:34:18 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:37:45.424 08:34:18 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:37:45.424 08:34:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:37:45.424 08:34:18 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:37:45.424 08:34:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:37:45.424 08:34:18 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:37:45.424 08:34:18 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:37:45.683 08:34:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:37:45.683 08:34:18 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:37:45.683 08:34:18 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:37:45.683 08:34:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:45.683 08:34:18 -- target/tls.sh@28 -- # bdevperf_pid=76561 00:37:45.683 08:34:18 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:37:45.683 08:34:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:45.683 08:34:18 -- target/tls.sh@31 -- # waitforlisten 76561 /var/tmp/bdevperf.sock 00:37:45.683 08:34:18 -- common/autotest_common.sh@819 -- # '[' -z 76561 ']' 00:37:45.683 08:34:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:45.683 08:34:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:37:45.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:45.683 08:34:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:45.683 08:34:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:37:45.683 08:34:18 -- common/autotest_common.sh@10 -- # set +x 00:37:45.683 [2024-04-17 08:34:18.811083] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
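The mismatch cases at @158 (just above) and @161 (starting here) fail earlier than the wrong-key case: the target resolves the PSK from an identity string carried in the TLS handshake, and no entry exists for these host/subsystem pairs. The identity format is printed verbatim in the tcp_sock_get_key and posix.c errors:

# Identity the target looks up, as shown by the errors in this log:
#   NVMe0R01 <host NQN> <subsystem NQN>
# Only the pair registered via nvmf_subsystem_add_host resolves:
identity="NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1"
# host2 -> cnode1 (@158) and host1 -> cnode2 (@161) both come up empty,
# so the handshake is rejected before any NVMe-level exchange.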
00:37:45.683 [2024-04-17 08:34:18.811177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76561 ] 00:37:45.683 [2024-04-17 08:34:18.949764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:45.942 [2024-04-17 08:34:19.097642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:46.511 08:34:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:37:46.511 08:34:19 -- common/autotest_common.sh@852 -- # return 0 00:37:46.511 08:34:19 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:37:46.770 [2024-04-17 08:34:19.853098] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:46.770 [2024-04-17 08:34:19.860003] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:37:46.770 [2024-04-17 08:34:19.860047] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:37:46.770 [2024-04-17 08:34:19.860118] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:46.770 [2024-04-17 08:34:19.860674] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5c570 (107): Transport endpoint is not connected 00:37:46.770 [2024-04-17 08:34:19.861659] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5c570 (9): Bad file descriptor 00:37:46.770 [2024-04-17 08:34:19.862654] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:37:46.770 [2024-04-17 08:34:19.862675] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:37:46.770 [2024-04-17 08:34:19.862686] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
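Every failed attach ends with the same teardown visible below: the Go RPC client reports Code=-32602 Invalid parameters, run_bdevperf returns nonzero, and killprocess reaps the idle bdevperf. A condensed sketch of killprocess matching the autotest_common.sh@926-950 trace lines (the Linux branch; the real helper also covers FreeBSD via the uname check):

killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                        # still running?
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_2
    [ "$process_name" != sudo ] || return 1           # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                               # reap; tolerate signal exit status
}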
00:37:46.770 2024/04/17 08:34:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:37:46.770 request: 00:37:46.770 { 00:37:46.770 "method": "bdev_nvme_attach_controller", 00:37:46.770 "params": { 00:37:46.770 "name": "TLSTEST", 00:37:46.770 "trtype": "tcp", 00:37:46.770 "traddr": "10.0.0.2", 00:37:46.770 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:46.770 "adrfam": "ipv4", 00:37:46.770 "trsvcid": "4420", 00:37:46.770 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:46.770 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:37:46.770 } 00:37:46.770 } 00:37:46.770 Got JSON-RPC error response 00:37:46.770 GoRPCClient: error on JSON-RPC call 00:37:46.770 08:34:19 -- target/tls.sh@36 -- # killprocess 76561 00:37:46.770 08:34:19 -- common/autotest_common.sh@926 -- # '[' -z 76561 ']' 00:37:46.770 08:34:19 -- common/autotest_common.sh@930 -- # kill -0 76561 00:37:46.770 08:34:19 -- common/autotest_common.sh@931 -- # uname 00:37:46.771 08:34:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:46.771 08:34:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76561 00:37:46.771 08:34:19 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:37:46.771 08:34:19 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:37:46.771 killing process with pid 76561 00:37:46.771 08:34:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76561' 00:37:46.771 08:34:19 -- common/autotest_common.sh@945 -- # kill 76561 00:37:46.771 Received shutdown signal, test time was about 10.000000 seconds 00:37:46.771 00:37:46.771 Latency(us) 00:37:46.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:46.771 =================================================================================================================== 00:37:46.771 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:46.771 08:34:19 -- common/autotest_common.sh@950 -- # wait 76561 00:37:47.030 08:34:20 -- target/tls.sh@37 -- # return 1 00:37:47.030 08:34:20 -- common/autotest_common.sh@643 -- # es=1 00:37:47.030 08:34:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:37:47.030 08:34:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:37:47.030 08:34:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:37:47.030 08:34:20 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:37:47.030 08:34:20 -- common/autotest_common.sh@640 -- # local es=0 00:37:47.030 08:34:20 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:37:47.030 08:34:20 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:37:47.030 08:34:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:37:47.030 08:34:20 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:37:47.030 08:34:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:37:47.030 08:34:20 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:37:47.030 08:34:20 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:37:47.030 08:34:20 -- target/tls.sh@23 -- 
# subnqn=nqn.2016-06.io.spdk:cnode1 00:37:47.030 08:34:20 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:37:47.030 08:34:20 -- target/tls.sh@23 -- # psk= 00:37:47.030 08:34:20 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:47.030 08:34:20 -- target/tls.sh@28 -- # bdevperf_pid=76606 00:37:47.030 08:34:20 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:47.030 08:34:20 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:37:47.030 08:34:20 -- target/tls.sh@31 -- # waitforlisten 76606 /var/tmp/bdevperf.sock 00:37:47.030 08:34:20 -- common/autotest_common.sh@819 -- # '[' -z 76606 ']' 00:37:47.030 08:34:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:47.030 08:34:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:37:47.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:47.030 08:34:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:47.030 08:34:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:37:47.030 08:34:20 -- common/autotest_common.sh@10 -- # set +x 00:37:47.030 [2024-04-17 08:34:20.327667] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:37:47.030 [2024-04-17 08:34:20.327734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76606 ] 00:37:47.291 [2024-04-17 08:34:20.459615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:47.291 [2024-04-17 08:34:20.600509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:48.227 08:34:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:37:48.227 08:34:21 -- common/autotest_common.sh@852 -- # return 0 00:37:48.227 08:34:21 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:37:48.227 [2024-04-17 08:34:21.407360] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:48.227 [2024-04-17 08:34:21.409339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cd170 (9): Bad file descriptor 00:37:48.227 [2024-04-17 08:34:21.410333] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:48.227 [2024-04-17 08:34:21.410347] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:37:48.227 [2024-04-17 08:34:21.410367] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
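The last mismatch case (@164) drops the PSK entirely, so bdevperf speaks plain NVMe/TCP to a listener that only accepts TLS; the socket dies on a bad file descriptor before controller init, as the tqpair errors above show. The attach differs from the working one only by the missing --psk, which the params map in the error dump below confirms:

# Same attach as the earlier sketches (rpc_py and sock as defined there),
# minus --psk: plain TCP against the TLS-only listener.
$rpc_py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1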
00:37:48.227 2024/04/17 08:34:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:37:48.227 request: 00:37:48.227 { 00:37:48.227 "method": "bdev_nvme_attach_controller", 00:37:48.227 "params": { 00:37:48.227 "name": "TLSTEST", 00:37:48.227 "trtype": "tcp", 00:37:48.227 "traddr": "10.0.0.2", 00:37:48.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:48.227 "adrfam": "ipv4", 00:37:48.227 "trsvcid": "4420", 00:37:48.227 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:37:48.227 } 00:37:48.227 } 00:37:48.227 Got JSON-RPC error response 00:37:48.227 GoRPCClient: error on JSON-RPC call 00:37:48.227 08:34:21 -- target/tls.sh@36 -- # killprocess 76606 00:37:48.227 08:34:21 -- common/autotest_common.sh@926 -- # '[' -z 76606 ']' 00:37:48.227 08:34:21 -- common/autotest_common.sh@930 -- # kill -0 76606 00:37:48.227 08:34:21 -- common/autotest_common.sh@931 -- # uname 00:37:48.227 08:34:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:48.227 08:34:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76606 00:37:48.227 08:34:21 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:37:48.227 08:34:21 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:37:48.227 killing process with pid 76606 00:37:48.227 Received shutdown signal, test time was about 10.000000 seconds 00:37:48.227 00:37:48.227 Latency(us) 00:37:48.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:48.227 =================================================================================================================== 00:37:48.227 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:48.227 08:34:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76606' 00:37:48.227 08:34:21 -- common/autotest_common.sh@945 -- # kill 76606 00:37:48.227 08:34:21 -- common/autotest_common.sh@950 -- # wait 76606 00:37:48.486 08:34:21 -- target/tls.sh@37 -- # return 1 00:37:48.486 08:34:21 -- common/autotest_common.sh@643 -- # es=1 00:37:48.486 08:34:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:37:48.486 08:34:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:37:48.486 08:34:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:37:48.486 08:34:21 -- target/tls.sh@167 -- # killprocess 75965 00:37:48.486 08:34:21 -- common/autotest_common.sh@926 -- # '[' -z 75965 ']' 00:37:48.486 08:34:21 -- common/autotest_common.sh@930 -- # kill -0 75965 00:37:48.486 08:34:21 -- common/autotest_common.sh@931 -- # uname 00:37:48.486 08:34:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:48.486 08:34:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75965 00:37:48.486 08:34:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:37:48.486 08:34:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:37:48.486 08:34:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75965' 00:37:48.486 killing process with pid 75965 00:37:48.486 08:34:21 -- common/autotest_common.sh@945 -- # kill 75965 00:37:48.486 08:34:21 -- common/autotest_common.sh@950 -- # wait 75965 00:37:49.056 08:34:22 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:37:49.056 08:34:22 -- 
target/tls.sh@49 -- # local key hash crc 00:37:49.056 08:34:22 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:37:49.056 08:34:22 -- target/tls.sh@51 -- # hash=02 00:37:49.056 08:34:22 -- target/tls.sh@52 -- # tail -c8 00:37:49.056 08:34:22 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:37:49.056 08:34:22 -- target/tls.sh@52 -- # gzip -1 -c 00:37:49.056 08:34:22 -- target/tls.sh@52 -- # head -c 4 00:37:49.056 08:34:22 -- target/tls.sh@52 -- # crc='�e�'\''' 00:37:49.056 08:34:22 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:37:49.056 08:34:22 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:37:49.056 08:34:22 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:37:49.056 08:34:22 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:37:49.056 08:34:22 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:37:49.056 08:34:22 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:37:49.056 08:34:22 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:37:49.056 08:34:22 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:37:49.056 08:34:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:37:49.056 08:34:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:37:49.056 08:34:22 -- common/autotest_common.sh@10 -- # set +x 00:37:49.056 08:34:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:37:49.056 08:34:22 -- nvmf/common.sh@469 -- # nvmfpid=76669 00:37:49.056 08:34:22 -- nvmf/common.sh@470 -- # waitforlisten 76669 00:37:49.056 08:34:22 -- common/autotest_common.sh@819 -- # '[' -z 76669 ']' 00:37:49.056 08:34:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:49.056 08:34:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:37:49.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:49.056 08:34:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:49.056 08:34:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:37:49.056 08:34:22 -- common/autotest_common.sh@10 -- # set +x 00:37:49.056 [2024-04-17 08:34:22.174686] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:37:49.056 [2024-04-17 08:34:22.174776] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:49.056 [2024-04-17 08:34:22.320712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:49.315 [2024-04-17 08:34:22.463479] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:37:49.315 [2024-04-17 08:34:22.463614] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:49.315 [2024-04-17 08:34:22.463622] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
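format_interchange_psk (tls.sh@49-54) has now run twice: hash 01 over a 32-hex-digit key earlier, hash 02 over a 48-hex-digit key here. The trick in the trace is that gzip's trailer carries a CRC32 of the uncompressed input: the last 8 bytes are CRC32 plus input size, so tail -c8 | head -c 4 extracts the checksum, which is appended raw to the key string before base64 encoding. A functional sketch of the same derivation (like the script itself, it keeps raw CRC bytes in a shell variable, which works for these keys but is not binary-safe in general; 01 and 02 are the SHA-256 and SHA-384 hash identifiers of the NVMe TLS PSK interchange format):

format_interchange_psk() {
    local key=$1 hash=$2 crc
    # gzip trailer = 4-byte CRC32 of the input, then 4-byte input length.
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
    echo "NVMeTLSkey-1:$hash:$(echo -n "$key$crc" | base64):"
}

format_interchange_psk 00112233445566778899aabbccddeeff 01
# -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02
# -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: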
00:37:49.315 [2024-04-17 08:34:22.463628] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:49.315 [2024-04-17 08:34:22.463655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:49.929 08:34:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:37:49.929 08:34:23 -- common/autotest_common.sh@852 -- # return 0 00:37:49.929 08:34:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:37:49.929 08:34:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:37:49.929 08:34:23 -- common/autotest_common.sh@10 -- # set +x 00:37:49.929 08:34:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:49.929 08:34:23 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:37:49.929 08:34:23 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:37:49.929 08:34:23 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:37:50.189 [2024-04-17 08:34:23.348116] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:50.189 08:34:23 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:37:50.450 08:34:23 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:37:50.709 [2024-04-17 08:34:23.847227] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:50.709 [2024-04-17 08:34:23.847421] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:50.709 08:34:23 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:37:50.968 malloc0 00:37:50.968 08:34:24 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:37:51.226 08:34:24 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:37:51.226 08:34:24 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:37:51.226 08:34:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:37:51.226 08:34:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:37:51.226 08:34:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:37:51.226 08:34:24 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:37:51.226 08:34:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:51.226 08:34:24 -- target/tls.sh@28 -- # bdevperf_pid=76772 00:37:51.226 08:34:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:51.226 08:34:24 -- target/tls.sh@31 -- # waitforlisten 76772 /var/tmp/bdevperf.sock 00:37:51.226 08:34:24 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:37:51.226 08:34:24 -- common/autotest_common.sh@819 -- # '[' -z 76772 ']' 00:37:51.226 08:34:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:51.226 
08:34:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:37:51.226 08:34:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:51.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:51.226 08:34:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:37:51.226 08:34:24 -- common/autotest_common.sh@10 -- # set +x 00:37:51.486 [2024-04-17 08:34:24.585618] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:37:51.486 [2024-04-17 08:34:24.585702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76772 ] 00:37:51.486 [2024-04-17 08:34:24.723002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:51.746 [2024-04-17 08:34:24.828541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:52.317 08:34:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:37:52.317 08:34:25 -- common/autotest_common.sh@852 -- # return 0 00:37:52.317 08:34:25 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:37:52.577 [2024-04-17 08:34:25.689940] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:52.577 TLSTESTn1 00:37:52.577 08:34:25 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:37:52.577 Running I/O for 10 seconds... 
00:38:02.637 00:38:02.637 Latency(us) 00:38:02.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:02.637 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:38:02.637 Verification LBA range: start 0x0 length 0x2000 00:38:02.637 TLSTESTn1 : 10.01 7346.03 28.70 0.00 0.00 17398.54 3176.64 20490.73 00:38:02.637 =================================================================================================================== 00:38:02.637 Total : 7346.03 28.70 0.00 0.00 17398.54 3176.64 20490.73 00:38:02.637 0 00:38:02.637 08:34:35 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:02.637 08:34:35 -- target/tls.sh@45 -- # killprocess 76772 00:38:02.637 08:34:35 -- common/autotest_common.sh@926 -- # '[' -z 76772 ']' 00:38:02.637 08:34:35 -- common/autotest_common.sh@930 -- # kill -0 76772 00:38:02.637 08:34:35 -- common/autotest_common.sh@931 -- # uname 00:38:02.637 08:34:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:38:02.637 08:34:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76772 00:38:02.637 08:34:35 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:38:02.637 08:34:35 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:38:02.637 killing process with pid 76772 00:38:02.637 08:34:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76772' 00:38:02.637 08:34:35 -- common/autotest_common.sh@945 -- # kill 76772 00:38:02.637 Received shutdown signal, test time was about 10.000000 seconds 00:38:02.637 00:38:02.637 Latency(us) 00:38:02.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:02.637 =================================================================================================================== 00:38:02.637 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:02.637 08:34:35 -- common/autotest_common.sh@950 -- # wait 76772 00:38:02.896 08:34:36 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:38:02.896 08:34:36 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:38:02.896 08:34:36 -- common/autotest_common.sh@640 -- # local es=0 00:38:02.896 08:34:36 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:38:02.896 08:34:36 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:38:02.896 08:34:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:38:02.896 08:34:36 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:38:02.896 08:34:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:38:02.896 08:34:36 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:38:02.896 08:34:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:38:02.896 08:34:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:38:02.896 08:34:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:38:02.896 08:34:36 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:38:02.896 08:34:36 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:02.896 08:34:36 -- target/tls.sh@28 -- # bdevperf_pid=76919 
00:38:02.896 08:34:36 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:38:02.896 08:34:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:02.896 08:34:36 -- target/tls.sh@31 -- # waitforlisten 76919 /var/tmp/bdevperf.sock 00:38:02.896 08:34:36 -- common/autotest_common.sh@819 -- # '[' -z 76919 ']' 00:38:02.896 08:34:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:02.896 08:34:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:38:02.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:02.896 08:34:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:02.896 08:34:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:38:02.896 08:34:36 -- common/autotest_common.sh@10 -- # set +x 00:38:03.153 [2024-04-17 08:34:36.238604] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:38:03.153 [2024-04-17 08:34:36.238679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76919 ] 00:38:03.153 [2024-04-17 08:34:36.363289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.153 [2024-04-17 08:34:36.465843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:04.086 08:34:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:38:04.086 08:34:37 -- common/autotest_common.sh@852 -- # return 0 00:38:04.086 08:34:37 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:38:04.086 [2024-04-17 08:34:37.338974] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:04.086 [2024-04-17 08:34:37.339021] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:38:04.086 2024/04/17 08:34:37 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:38:04.086 request: 00:38:04.086 { 00:38:04.086 "method": "bdev_nvme_attach_controller", 00:38:04.086 "params": { 00:38:04.086 "name": "TLSTEST", 00:38:04.086 "trtype": "tcp", 00:38:04.086 "traddr": "10.0.0.2", 00:38:04.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:04.086 "adrfam": "ipv4", 00:38:04.086 "trsvcid": "4420", 00:38:04.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:04.086 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:38:04.086 } 00:38:04.086 } 00:38:04.086 Got JSON-RPC error response 00:38:04.086 GoRPCClient: error on JSON-RPC call 00:38:04.086 08:34:37 -- target/tls.sh@36 -- # killprocess 76919 00:38:04.086 08:34:37 -- common/autotest_common.sh@926 -- # '[' -z 76919 ']' 
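The attach failure above is the intended outcome of the negative test at tls.sh@179/@180: the key file was deliberately chmod'ed to 0666, and the initiator-side loader (tcp_load_psk in bdev_nvme_rpc.c) refuses a group- or world-accessible PSK, so bdev_nvme_attach_controller comes back with Code=-22. A minimal sketch of the behavior being exercised, paths shortened as before:

  chmod 0666 key_long.txt
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key_long.txt
  # -> error Code=-22, "Could not retrieve PSK from file"
  chmod 0600 key_long.txt   # the mode tls.sh@190 restores before the rerun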
00:38:04.086 08:34:37 -- common/autotest_common.sh@930 -- # kill -0 76919 00:38:04.086 08:34:37 -- common/autotest_common.sh@931 -- # uname 00:38:04.086 08:34:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:38:04.086 08:34:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76919 00:38:04.086 killing process with pid 76919 00:38:04.086 Received shutdown signal, test time was about 10.000000 seconds 00:38:04.086 00:38:04.086 Latency(us) 00:38:04.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:04.086 =================================================================================================================== 00:38:04.087 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:04.087 08:34:37 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:38:04.087 08:34:37 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:38:04.087 08:34:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76919' 00:38:04.087 08:34:37 -- common/autotest_common.sh@945 -- # kill 76919 00:38:04.087 08:34:37 -- common/autotest_common.sh@950 -- # wait 76919 00:38:04.344 08:34:37 -- target/tls.sh@37 -- # return 1 00:38:04.344 08:34:37 -- common/autotest_common.sh@643 -- # es=1 00:38:04.344 08:34:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:38:04.344 08:34:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:38:04.344 08:34:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:38:04.344 08:34:37 -- target/tls.sh@183 -- # killprocess 76669 00:38:04.344 08:34:37 -- common/autotest_common.sh@926 -- # '[' -z 76669 ']' 00:38:04.344 08:34:37 -- common/autotest_common.sh@930 -- # kill -0 76669 00:38:04.344 08:34:37 -- common/autotest_common.sh@931 -- # uname 00:38:04.344 08:34:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:38:04.345 08:34:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76669 00:38:04.345 08:34:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:38:04.345 08:34:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:38:04.345 killing process with pid 76669 00:38:04.345 08:34:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76669' 00:38:04.345 08:34:37 -- common/autotest_common.sh@945 -- # kill 76669 00:38:04.345 08:34:37 -- common/autotest_common.sh@950 -- # wait 76669 00:38:04.602 08:34:37 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:38:04.602 08:34:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:38:04.602 08:34:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:38:04.602 08:34:37 -- common/autotest_common.sh@10 -- # set +x 00:38:04.602 08:34:37 -- nvmf/common.sh@469 -- # nvmfpid=76975 00:38:04.602 08:34:37 -- nvmf/common.sh@470 -- # waitforlisten 76975 00:38:04.602 08:34:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:38:04.602 08:34:37 -- common/autotest_common.sh@819 -- # '[' -z 76975 ']' 00:38:04.602 08:34:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:04.602 08:34:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:38:04.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:04.602 08:34:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:04.602 08:34:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:38:04.602 08:34:37 -- common/autotest_common.sh@10 -- # set +x 00:38:04.860 [2024-04-17 08:34:37.933988] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:38:04.860 [2024-04-17 08:34:37.934067] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:04.860 [2024-04-17 08:34:38.060255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:04.860 [2024-04-17 08:34:38.166885] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:38:04.860 [2024-04-17 08:34:38.167024] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:04.860 [2024-04-17 08:34:38.167032] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:04.860 [2024-04-17 08:34:38.167039] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:04.860 [2024-04-17 08:34:38.167062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:05.795 08:34:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:38:05.795 08:34:38 -- common/autotest_common.sh@852 -- # return 0 00:38:05.795 08:34:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:38:05.795 08:34:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:38:05.795 08:34:38 -- common/autotest_common.sh@10 -- # set +x 00:38:05.795 08:34:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:05.795 08:34:38 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:38:05.795 08:34:38 -- common/autotest_common.sh@640 -- # local es=0 00:38:05.795 08:34:38 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:38:05.795 08:34:38 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:38:05.795 08:34:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:38:05.795 08:34:38 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:38:05.795 08:34:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:38:05.795 08:34:38 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:38:05.795 08:34:38 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:38:05.795 08:34:38 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:38:05.795 [2024-04-17 08:34:39.126004] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:06.054 08:34:39 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:38:06.054 08:34:39 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:38:06.313 [2024-04-17 08:34:39.545434] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:06.313 [2024-04-17 08:34:39.545665] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
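At this point the key is still world-readable, so the same permission rule is about to be hit on the target side (tls.sh@186): the nvmf_subsystem_add_host that follows is expected to fail with Code=-32603 until tls.sh@190 restores 0600 and the setup is repeated. Sketch of the guarded call, paths shortened as before:

  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key_long.txt
  # while key_long.txt is 0666 -> Code=-32603, "Could not retrieve PSK from file"
  chmod 0600 key_long.txt
  # the identical call then succeeds on the next setup pass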
00:38:06.313 08:34:39 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:38:06.573 malloc0 00:38:06.573 08:34:39 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:06.831 08:34:40 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:38:07.089 [2024-04-17 08:34:40.198515] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:38:07.089 [2024-04-17 08:34:40.198561] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:38:07.089 [2024-04-17 08:34:40.198579] subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:38:07.090 2024/04/17 08:34:40 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:38:07.090 request: 00:38:07.090 { 00:38:07.090 "method": "nvmf_subsystem_add_host", 00:38:07.090 "params": { 00:38:07.090 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:07.090 "host": "nqn.2016-06.io.spdk:host1", 00:38:07.090 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:38:07.090 } 00:38:07.090 } 00:38:07.090 Got JSON-RPC error response 00:38:07.090 GoRPCClient: error on JSON-RPC call 00:38:07.090 08:34:40 -- common/autotest_common.sh@643 -- # es=1 00:38:07.090 08:34:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:38:07.090 08:34:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:38:07.090 08:34:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:38:07.090 08:34:40 -- target/tls.sh@189 -- # killprocess 76975 00:38:07.090 08:34:40 -- common/autotest_common.sh@926 -- # '[' -z 76975 ']' 00:38:07.090 08:34:40 -- common/autotest_common.sh@930 -- # kill -0 76975 00:38:07.090 08:34:40 -- common/autotest_common.sh@931 -- # uname 00:38:07.090 08:34:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:38:07.090 08:34:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76975 00:38:07.090 08:34:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:38:07.090 08:34:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:38:07.090 killing process with pid 76975 00:38:07.090 08:34:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76975' 00:38:07.090 08:34:40 -- common/autotest_common.sh@945 -- # kill 76975 00:38:07.090 08:34:40 -- common/autotest_common.sh@950 -- # wait 76975 00:38:07.348 08:34:40 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:38:07.348 08:34:40 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:38:07.348 08:34:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:38:07.348 08:34:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:38:07.348 08:34:40 -- common/autotest_common.sh@10 -- # set +x 00:38:07.348 08:34:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:38:07.348 08:34:40 -- nvmf/common.sh@469 -- # nvmfpid=77081 00:38:07.348 08:34:40 -- nvmf/common.sh@470 -- # waitforlisten 77081 00:38:07.348 08:34:40 -- 
common/autotest_common.sh@819 -- # '[' -z 77081 ']' 00:38:07.348 08:34:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:07.348 08:34:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:38:07.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:07.348 08:34:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:07.348 08:34:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:38:07.348 08:34:40 -- common/autotest_common.sh@10 -- # set +x 00:38:07.348 [2024-04-17 08:34:40.565237] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:38:07.349 [2024-04-17 08:34:40.565319] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:07.608 [2024-04-17 08:34:40.706951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.608 [2024-04-17 08:34:40.816462] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:38:07.608 [2024-04-17 08:34:40.816625] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:07.608 [2024-04-17 08:34:40.816639] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:07.608 [2024-04-17 08:34:40.816650] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:07.608 [2024-04-17 08:34:40.816684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:08.177 08:34:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:38:08.177 08:34:41 -- common/autotest_common.sh@852 -- # return 0 00:38:08.177 08:34:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:38:08.177 08:34:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:38:08.177 08:34:41 -- common/autotest_common.sh@10 -- # set +x 00:38:08.177 08:34:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:08.177 08:34:41 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:38:08.177 08:34:41 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:38:08.177 08:34:41 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:38:08.436 [2024-04-17 08:34:41.696474] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:08.436 08:34:41 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:38:08.696 08:34:42 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:38:08.955 [2024-04-17 08:34:42.203601] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:08.955 [2024-04-17 08:34:42.203966] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:08.955 08:34:42 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:38:09.219 malloc0 00:38:09.220 08:34:42 -- target/tls.sh@65 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:09.480 08:34:42 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:38:09.739 08:34:42 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:38:09.739 08:34:42 -- target/tls.sh@197 -- # bdevperf_pid=77181 00:38:09.739 08:34:42 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:09.739 08:34:42 -- target/tls.sh@200 -- # waitforlisten 77181 /var/tmp/bdevperf.sock 00:38:09.739 08:34:42 -- common/autotest_common.sh@819 -- # '[' -z 77181 ']' 00:38:09.739 08:34:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:09.739 08:34:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:38:09.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:09.739 08:34:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:09.739 08:34:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:38:09.739 08:34:42 -- common/autotest_common.sh@10 -- # set +x 00:38:09.739 [2024-04-17 08:34:42.895421] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:38:09.739 [2024-04-17 08:34:42.895970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77181 ] 00:38:09.739 [2024-04-17 08:34:43.034020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:10.021 [2024-04-17 08:34:43.138766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:10.622 08:34:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:38:10.622 08:34:43 -- common/autotest_common.sh@852 -- # return 0 00:38:10.622 08:34:43 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:38:10.883 [2024-04-17 08:34:44.049637] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:10.883 TLSTESTn1 00:38:10.883 08:34:44 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:38:11.452 08:34:44 -- target/tls.sh@205 -- # tgtconf='{ 00:38:11.452 "subsystems": [ 00:38:11.452 { 00:38:11.452 "subsystem": "iobuf", 00:38:11.452 "config": [ 00:38:11.452 { 00:38:11.452 "method": "iobuf_set_options", 00:38:11.452 "params": { 00:38:11.452 "large_bufsize": 135168, 00:38:11.452 "large_pool_count": 1024, 00:38:11.452 "small_bufsize": 8192, 00:38:11.452 "small_pool_count": 8192 00:38:11.452 } 00:38:11.452 } 00:38:11.452 ] 00:38:11.452 }, 00:38:11.452 { 00:38:11.452 "subsystem": "sock", 00:38:11.452 "config": [ 00:38:11.452 { 00:38:11.452 "method": "sock_impl_set_options", 00:38:11.452 "params": { 00:38:11.452 "enable_ktls": false, 00:38:11.452 "enable_placement_id": 0, 00:38:11.452 "enable_quickack": false, 00:38:11.452 "enable_recv_pipe": true, 00:38:11.452 
"enable_zerocopy_send_client": false, 00:38:11.452 "enable_zerocopy_send_server": true, 00:38:11.452 "impl_name": "posix", 00:38:11.452 "recv_buf_size": 2097152, 00:38:11.452 "send_buf_size": 2097152, 00:38:11.452 "tls_version": 0, 00:38:11.452 "zerocopy_threshold": 0 00:38:11.452 } 00:38:11.452 }, 00:38:11.452 { 00:38:11.452 "method": "sock_impl_set_options", 00:38:11.452 "params": { 00:38:11.452 "enable_ktls": false, 00:38:11.452 "enable_placement_id": 0, 00:38:11.452 "enable_quickack": false, 00:38:11.452 "enable_recv_pipe": true, 00:38:11.452 "enable_zerocopy_send_client": false, 00:38:11.452 "enable_zerocopy_send_server": true, 00:38:11.452 "impl_name": "ssl", 00:38:11.452 "recv_buf_size": 4096, 00:38:11.452 "send_buf_size": 4096, 00:38:11.452 "tls_version": 0, 00:38:11.452 "zerocopy_threshold": 0 00:38:11.452 } 00:38:11.452 } 00:38:11.452 ] 00:38:11.452 }, 00:38:11.452 { 00:38:11.452 "subsystem": "vmd", 00:38:11.452 "config": [] 00:38:11.452 }, 00:38:11.452 { 00:38:11.452 "subsystem": "accel", 00:38:11.452 "config": [ 00:38:11.452 { 00:38:11.452 "method": "accel_set_options", 00:38:11.452 "params": { 00:38:11.452 "buf_count": 2048, 00:38:11.452 "large_cache_size": 16, 00:38:11.452 "sequence_count": 2048, 00:38:11.452 "small_cache_size": 128, 00:38:11.452 "task_count": 2048 00:38:11.452 } 00:38:11.452 } 00:38:11.452 ] 00:38:11.452 }, 00:38:11.452 { 00:38:11.452 "subsystem": "bdev", 00:38:11.452 "config": [ 00:38:11.452 { 00:38:11.452 "method": "bdev_set_options", 00:38:11.452 "params": { 00:38:11.452 "bdev_auto_examine": true, 00:38:11.452 "bdev_io_cache_size": 256, 00:38:11.452 "bdev_io_pool_size": 65535, 00:38:11.452 "iobuf_large_cache_size": 16, 00:38:11.452 "iobuf_small_cache_size": 128 00:38:11.452 } 00:38:11.452 }, 00:38:11.452 { 00:38:11.452 "method": "bdev_raid_set_options", 00:38:11.452 "params": { 00:38:11.452 "process_window_size_kb": 1024 00:38:11.452 } 00:38:11.452 }, 00:38:11.452 { 00:38:11.452 "method": "bdev_iscsi_set_options", 00:38:11.452 "params": { 00:38:11.452 "timeout_sec": 30 00:38:11.452 } 00:38:11.452 }, 00:38:11.452 { 00:38:11.452 "method": "bdev_nvme_set_options", 00:38:11.452 "params": { 00:38:11.452 "action_on_timeout": "none", 00:38:11.452 "allow_accel_sequence": false, 00:38:11.452 "arbitration_burst": 0, 00:38:11.452 "bdev_retry_count": 3, 00:38:11.452 "ctrlr_loss_timeout_sec": 0, 00:38:11.452 "delay_cmd_submit": true, 00:38:11.452 "fast_io_fail_timeout_sec": 0, 00:38:11.452 "generate_uuids": false, 00:38:11.452 "high_priority_weight": 0, 00:38:11.452 "io_path_stat": false, 00:38:11.452 "io_queue_requests": 0, 00:38:11.452 "keep_alive_timeout_ms": 10000, 00:38:11.452 "low_priority_weight": 0, 00:38:11.452 "medium_priority_weight": 0, 00:38:11.452 "nvme_adminq_poll_period_us": 10000, 00:38:11.452 "nvme_ioq_poll_period_us": 0, 00:38:11.452 "reconnect_delay_sec": 0, 00:38:11.452 "timeout_admin_us": 0, 00:38:11.452 "timeout_us": 0, 00:38:11.452 "transport_ack_timeout": 0, 00:38:11.452 "transport_retry_count": 4, 00:38:11.452 "transport_tos": 0 00:38:11.452 } 00:38:11.452 }, 00:38:11.452 { 00:38:11.452 "method": "bdev_nvme_set_hotplug", 00:38:11.452 "params": { 00:38:11.452 "enable": false, 00:38:11.452 "period_us": 100000 00:38:11.452 } 00:38:11.452 }, 00:38:11.452 { 00:38:11.452 "method": "bdev_malloc_create", 00:38:11.452 "params": { 00:38:11.452 "block_size": 4096, 00:38:11.452 "name": "malloc0", 00:38:11.452 "num_blocks": 8192, 00:38:11.452 "optimal_io_boundary": 0, 00:38:11.452 "physical_block_size": 4096, 00:38:11.452 "uuid": 
"0d92c40d-6e9d-4a35-ae2d-0f971e5f90d3" 00:38:11.452 } 00:38:11.452 }, 00:38:11.452 { 00:38:11.452 "method": "bdev_wait_for_examine" 00:38:11.452 } 00:38:11.452 ] 00:38:11.452 }, 00:38:11.452 { 00:38:11.452 "subsystem": "nbd", 00:38:11.452 "config": [] 00:38:11.452 }, 00:38:11.452 { 00:38:11.452 "subsystem": "scheduler", 00:38:11.452 "config": [ 00:38:11.452 { 00:38:11.452 "method": "framework_set_scheduler", 00:38:11.452 "params": { 00:38:11.452 "name": "static" 00:38:11.452 } 00:38:11.452 } 00:38:11.452 ] 00:38:11.452 }, 00:38:11.452 { 00:38:11.453 "subsystem": "nvmf", 00:38:11.453 "config": [ 00:38:11.453 { 00:38:11.453 "method": "nvmf_set_config", 00:38:11.453 "params": { 00:38:11.453 "admin_cmd_passthru": { 00:38:11.453 "identify_ctrlr": false 00:38:11.453 }, 00:38:11.453 "discovery_filter": "match_any" 00:38:11.453 } 00:38:11.453 }, 00:38:11.453 { 00:38:11.453 "method": "nvmf_set_max_subsystems", 00:38:11.453 "params": { 00:38:11.453 "max_subsystems": 1024 00:38:11.453 } 00:38:11.453 }, 00:38:11.453 { 00:38:11.453 "method": "nvmf_set_crdt", 00:38:11.453 "params": { 00:38:11.453 "crdt1": 0, 00:38:11.453 "crdt2": 0, 00:38:11.453 "crdt3": 0 00:38:11.453 } 00:38:11.453 }, 00:38:11.453 { 00:38:11.453 "method": "nvmf_create_transport", 00:38:11.453 "params": { 00:38:11.453 "abort_timeout_sec": 1, 00:38:11.453 "buf_cache_size": 4294967295, 00:38:11.453 "c2h_success": false, 00:38:11.453 "dif_insert_or_strip": false, 00:38:11.453 "in_capsule_data_size": 4096, 00:38:11.453 "io_unit_size": 131072, 00:38:11.453 "max_aq_depth": 128, 00:38:11.453 "max_io_qpairs_per_ctrlr": 127, 00:38:11.453 "max_io_size": 131072, 00:38:11.453 "max_queue_depth": 128, 00:38:11.453 "num_shared_buffers": 511, 00:38:11.453 "sock_priority": 0, 00:38:11.453 "trtype": "TCP", 00:38:11.453 "zcopy": false 00:38:11.453 } 00:38:11.453 }, 00:38:11.453 { 00:38:11.453 "method": "nvmf_create_subsystem", 00:38:11.453 "params": { 00:38:11.453 "allow_any_host": false, 00:38:11.453 "ana_reporting": false, 00:38:11.453 "max_cntlid": 65519, 00:38:11.453 "max_namespaces": 10, 00:38:11.453 "min_cntlid": 1, 00:38:11.453 "model_number": "SPDK bdev Controller", 00:38:11.453 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:11.453 "serial_number": "SPDK00000000000001" 00:38:11.453 } 00:38:11.453 }, 00:38:11.453 { 00:38:11.453 "method": "nvmf_subsystem_add_host", 00:38:11.453 "params": { 00:38:11.453 "host": "nqn.2016-06.io.spdk:host1", 00:38:11.453 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:11.453 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:38:11.453 } 00:38:11.453 }, 00:38:11.453 { 00:38:11.453 "method": "nvmf_subsystem_add_ns", 00:38:11.453 "params": { 00:38:11.453 "namespace": { 00:38:11.453 "bdev_name": "malloc0", 00:38:11.453 "nguid": "0D92C40D6E9D4A35AE2D0F971E5F90D3", 00:38:11.453 "nsid": 1, 00:38:11.453 "uuid": "0d92c40d-6e9d-4a35-ae2d-0f971e5f90d3" 00:38:11.453 }, 00:38:11.453 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:38:11.453 } 00:38:11.453 }, 00:38:11.453 { 00:38:11.453 "method": "nvmf_subsystem_add_listener", 00:38:11.453 "params": { 00:38:11.453 "listen_address": { 00:38:11.453 "adrfam": "IPv4", 00:38:11.453 "traddr": "10.0.0.2", 00:38:11.453 "trsvcid": "4420", 00:38:11.453 "trtype": "TCP" 00:38:11.453 }, 00:38:11.453 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:11.453 "secure_channel": true 00:38:11.453 } 00:38:11.453 } 00:38:11.453 ] 00:38:11.453 } 00:38:11.453 ] 00:38:11.453 }' 00:38:11.453 08:34:44 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 
00:38:11.453 08:34:44 -- target/tls.sh@206 -- # bdevperfconf='{ 00:38:11.453 "subsystems": [ 00:38:11.453 { 00:38:11.453 "subsystem": "iobuf", 00:38:11.453 "config": [ 00:38:11.453 { 00:38:11.453 "method": "iobuf_set_options", 00:38:11.453 "params": { 00:38:11.453 "large_bufsize": 135168, 00:38:11.453 "large_pool_count": 1024, 00:38:11.453 "small_bufsize": 8192, 00:38:11.453 "small_pool_count": 8192 00:38:11.453 } 00:38:11.453 } 00:38:11.453 ] 00:38:11.453 }, 00:38:11.453 { 00:38:11.453 "subsystem": "sock", 00:38:11.453 "config": [ 00:38:11.453 { 00:38:11.453 "method": "sock_impl_set_options", 00:38:11.453 "params": { 00:38:11.453 "enable_ktls": false, 00:38:11.453 "enable_placement_id": 0, 00:38:11.453 "enable_quickack": false, 00:38:11.453 "enable_recv_pipe": true, 00:38:11.453 "enable_zerocopy_send_client": false, 00:38:11.453 "enable_zerocopy_send_server": true, 00:38:11.453 "impl_name": "posix", 00:38:11.453 "recv_buf_size": 2097152, 00:38:11.453 "send_buf_size": 2097152, 00:38:11.453 "tls_version": 0, 00:38:11.453 "zerocopy_threshold": 0 00:38:11.453 } 00:38:11.453 }, 00:38:11.453 { 00:38:11.453 "method": "sock_impl_set_options", 00:38:11.453 "params": { 00:38:11.453 "enable_ktls": false, 00:38:11.453 "enable_placement_id": 0, 00:38:11.453 "enable_quickack": false, 00:38:11.453 "enable_recv_pipe": true, 00:38:11.453 "enable_zerocopy_send_client": false, 00:38:11.453 "enable_zerocopy_send_server": true, 00:38:11.453 "impl_name": "ssl", 00:38:11.453 "recv_buf_size": 4096, 00:38:11.453 "send_buf_size": 4096, 00:38:11.453 "tls_version": 0, 00:38:11.453 "zerocopy_threshold": 0 00:38:11.453 } 00:38:11.453 } 00:38:11.453 ] 00:38:11.453 }, 00:38:11.453 { 00:38:11.453 "subsystem": "vmd", 00:38:11.453 "config": [] 00:38:11.453 }, 00:38:11.453 { 00:38:11.453 "subsystem": "accel", 00:38:11.453 "config": [ 00:38:11.453 { 00:38:11.453 "method": "accel_set_options", 00:38:11.454 "params": { 00:38:11.454 "buf_count": 2048, 00:38:11.454 "large_cache_size": 16, 00:38:11.454 "sequence_count": 2048, 00:38:11.454 "small_cache_size": 128, 00:38:11.454 "task_count": 2048 00:38:11.454 } 00:38:11.454 } 00:38:11.454 ] 00:38:11.454 }, 00:38:11.454 { 00:38:11.454 "subsystem": "bdev", 00:38:11.454 "config": [ 00:38:11.454 { 00:38:11.454 "method": "bdev_set_options", 00:38:11.454 "params": { 00:38:11.454 "bdev_auto_examine": true, 00:38:11.454 "bdev_io_cache_size": 256, 00:38:11.454 "bdev_io_pool_size": 65535, 00:38:11.454 "iobuf_large_cache_size": 16, 00:38:11.454 "iobuf_small_cache_size": 128 00:38:11.454 } 00:38:11.454 }, 00:38:11.454 { 00:38:11.454 "method": "bdev_raid_set_options", 00:38:11.454 "params": { 00:38:11.454 "process_window_size_kb": 1024 00:38:11.454 } 00:38:11.454 }, 00:38:11.454 { 00:38:11.454 "method": "bdev_iscsi_set_options", 00:38:11.454 "params": { 00:38:11.454 "timeout_sec": 30 00:38:11.454 } 00:38:11.454 }, 00:38:11.454 { 00:38:11.454 "method": "bdev_nvme_set_options", 00:38:11.454 "params": { 00:38:11.454 "action_on_timeout": "none", 00:38:11.454 "allow_accel_sequence": false, 00:38:11.454 "arbitration_burst": 0, 00:38:11.454 "bdev_retry_count": 3, 00:38:11.454 "ctrlr_loss_timeout_sec": 0, 00:38:11.454 "delay_cmd_submit": true, 00:38:11.454 "fast_io_fail_timeout_sec": 0, 00:38:11.454 "generate_uuids": false, 00:38:11.454 "high_priority_weight": 0, 00:38:11.454 "io_path_stat": false, 00:38:11.454 "io_queue_requests": 512, 00:38:11.454 "keep_alive_timeout_ms": 10000, 00:38:11.454 "low_priority_weight": 0, 00:38:11.454 "medium_priority_weight": 0, 00:38:11.454 "nvme_adminq_poll_period_us": 
10000, 00:38:11.454 "nvme_ioq_poll_period_us": 0, 00:38:11.454 "reconnect_delay_sec": 0, 00:38:11.454 "timeout_admin_us": 0, 00:38:11.454 "timeout_us": 0, 00:38:11.454 "transport_ack_timeout": 0, 00:38:11.454 "transport_retry_count": 4, 00:38:11.454 "transport_tos": 0 00:38:11.454 } 00:38:11.454 }, 00:38:11.454 { 00:38:11.454 "method": "bdev_nvme_attach_controller", 00:38:11.454 "params": { 00:38:11.454 "adrfam": "IPv4", 00:38:11.454 "ctrlr_loss_timeout_sec": 0, 00:38:11.454 "ddgst": false, 00:38:11.454 "fast_io_fail_timeout_sec": 0, 00:38:11.454 "hdgst": false, 00:38:11.454 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:11.454 "name": "TLSTEST", 00:38:11.454 "prchk_guard": false, 00:38:11.454 "prchk_reftag": false, 00:38:11.454 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:38:11.454 "reconnect_delay_sec": 0, 00:38:11.454 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:11.454 "traddr": "10.0.0.2", 00:38:11.454 "trsvcid": "4420", 00:38:11.454 "trtype": "TCP" 00:38:11.454 } 00:38:11.454 }, 00:38:11.454 { 00:38:11.454 "method": "bdev_nvme_set_hotplug", 00:38:11.454 "params": { 00:38:11.454 "enable": false, 00:38:11.454 "period_us": 100000 00:38:11.454 } 00:38:11.454 }, 00:38:11.454 { 00:38:11.454 "method": "bdev_wait_for_examine" 00:38:11.454 } 00:38:11.454 ] 00:38:11.454 }, 00:38:11.454 { 00:38:11.454 "subsystem": "nbd", 00:38:11.454 "config": [] 00:38:11.454 } 00:38:11.454 ] 00:38:11.454 }' 00:38:11.454 08:34:44 -- target/tls.sh@208 -- # killprocess 77181 00:38:11.454 08:34:44 -- common/autotest_common.sh@926 -- # '[' -z 77181 ']' 00:38:11.454 08:34:44 -- common/autotest_common.sh@930 -- # kill -0 77181 00:38:11.454 08:34:44 -- common/autotest_common.sh@931 -- # uname 00:38:11.454 08:34:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:38:11.454 08:34:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77181 00:38:11.714 08:34:44 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:38:11.714 08:34:44 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:38:11.714 killing process with pid 77181 00:38:11.714 08:34:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77181' 00:38:11.714 08:34:44 -- common/autotest_common.sh@945 -- # kill 77181 00:38:11.714 Received shutdown signal, test time was about 10.000000 seconds 00:38:11.714 00:38:11.714 Latency(us) 00:38:11.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:11.714 =================================================================================================================== 00:38:11.714 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:11.714 08:34:44 -- common/autotest_common.sh@950 -- # wait 77181 00:38:11.714 08:34:45 -- target/tls.sh@209 -- # killprocess 77081 00:38:11.714 08:34:45 -- common/autotest_common.sh@926 -- # '[' -z 77081 ']' 00:38:11.714 08:34:45 -- common/autotest_common.sh@930 -- # kill -0 77081 00:38:11.714 08:34:45 -- common/autotest_common.sh@931 -- # uname 00:38:11.714 08:34:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:38:11.714 08:34:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77081 00:38:11.975 08:34:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:38:11.975 08:34:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:38:11.975 killing process with pid 77081 00:38:11.975 08:34:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77081' 00:38:11.975 08:34:45 -- 
common/autotest_common.sh@945 -- # kill 77081 00:38:11.975 08:34:45 -- common/autotest_common.sh@950 -- # wait 77081 00:38:11.975 08:34:45 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:38:11.975 08:34:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:38:11.975 08:34:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:38:11.975 08:34:45 -- target/tls.sh@212 -- # echo '{ 00:38:11.975 "subsystems": [ 00:38:11.975 { 00:38:11.975 "subsystem": "iobuf", 00:38:11.975 "config": [ 00:38:11.975 { 00:38:11.975 "method": "iobuf_set_options", 00:38:11.975 "params": { 00:38:11.975 "large_bufsize": 135168, 00:38:11.975 "large_pool_count": 1024, 00:38:11.975 "small_bufsize": 8192, 00:38:11.975 "small_pool_count": 8192 00:38:11.975 } 00:38:11.975 } 00:38:11.975 ] 00:38:11.975 }, 00:38:11.975 { 00:38:11.975 "subsystem": "sock", 00:38:11.975 "config": [ 00:38:11.975 { 00:38:11.975 "method": "sock_impl_set_options", 00:38:11.975 "params": { 00:38:11.975 "enable_ktls": false, 00:38:11.975 "enable_placement_id": 0, 00:38:11.975 "enable_quickack": false, 00:38:11.975 "enable_recv_pipe": true, 00:38:11.975 "enable_zerocopy_send_client": false, 00:38:11.975 "enable_zerocopy_send_server": true, 00:38:11.975 "impl_name": "posix", 00:38:11.975 "recv_buf_size": 2097152, 00:38:11.975 "send_buf_size": 2097152, 00:38:11.975 "tls_version": 0, 00:38:11.975 "zerocopy_threshold": 0 00:38:11.975 } 00:38:11.975 }, 00:38:11.975 { 00:38:11.975 "method": "sock_impl_set_options", 00:38:11.975 "params": { 00:38:11.975 "enable_ktls": false, 00:38:11.975 "enable_placement_id": 0, 00:38:11.975 "enable_quickack": false, 00:38:11.975 "enable_recv_pipe": true, 00:38:11.975 "enable_zerocopy_send_client": false, 00:38:11.975 "enable_zerocopy_send_server": true, 00:38:11.975 "impl_name": "ssl", 00:38:11.975 "recv_buf_size": 4096, 00:38:11.975 "send_buf_size": 4096, 00:38:11.975 "tls_version": 0, 00:38:11.975 "zerocopy_threshold": 0 00:38:11.975 } 00:38:11.975 } 00:38:11.975 ] 00:38:11.975 }, 00:38:11.975 { 00:38:11.975 "subsystem": "vmd", 00:38:11.975 "config": [] 00:38:11.975 }, 00:38:11.975 { 00:38:11.975 "subsystem": "accel", 00:38:11.975 "config": [ 00:38:11.975 { 00:38:11.975 "method": "accel_set_options", 00:38:11.975 "params": { 00:38:11.975 "buf_count": 2048, 00:38:11.975 "large_cache_size": 16, 00:38:11.975 "sequence_count": 2048, 00:38:11.975 "small_cache_size": 128, 00:38:11.975 "task_count": 2048 00:38:11.975 } 00:38:11.975 } 00:38:11.975 ] 00:38:11.975 }, 00:38:11.975 { 00:38:11.975 "subsystem": "bdev", 00:38:11.975 "config": [ 00:38:11.975 { 00:38:11.975 "method": "bdev_set_options", 00:38:11.975 "params": { 00:38:11.975 "bdev_auto_examine": true, 00:38:11.975 "bdev_io_cache_size": 256, 00:38:11.975 "bdev_io_pool_size": 65535, 00:38:11.975 "iobuf_large_cache_size": 16, 00:38:11.975 "iobuf_small_cache_size": 128 00:38:11.975 } 00:38:11.975 }, 00:38:11.975 { 00:38:11.975 "method": "bdev_raid_set_options", 00:38:11.975 "params": { 00:38:11.975 "process_window_size_kb": 1024 00:38:11.975 } 00:38:11.975 }, 00:38:11.975 { 00:38:11.975 "method": "bdev_iscsi_set_options", 00:38:11.975 "params": { 00:38:11.975 "timeout_sec": 30 00:38:11.975 } 00:38:11.975 }, 00:38:11.976 { 00:38:11.976 "method": "bdev_nvme_set_options", 00:38:11.976 "params": { 00:38:11.976 "action_on_timeout": "none", 00:38:11.976 "allow_accel_sequence": false, 00:38:11.976 "arbitration_burst": 0, 00:38:11.976 "bdev_retry_count": 3, 00:38:11.976 "ctrlr_loss_timeout_sec": 0, 00:38:11.976 "delay_cmd_submit": true, 00:38:11.976 
"fast_io_fail_timeout_sec": 0, 00:38:11.976 "generate_uuids": false, 00:38:11.976 "high_priority_weight": 0, 00:38:11.976 "io_path_stat": false, 00:38:11.976 "io_queue_requests": 0, 00:38:11.976 "keep_alive_timeout_ms": 10000, 00:38:11.976 "low_priority_weight": 0, 00:38:11.976 "medium_priority_weight": 0, 00:38:11.976 "nvme_adminq_poll_period_us": 10000, 00:38:11.976 "nvme_ioq_poll_period_us": 0, 00:38:11.976 "reconnect_delay_sec": 0, 00:38:11.976 "timeout_admin_us": 0, 00:38:11.976 "timeout_us": 0, 00:38:11.976 "transport_ack_timeout": 0, 00:38:11.976 "transport_retry_count": 4, 00:38:11.976 "transport_tos": 0 00:38:11.976 } 00:38:11.976 }, 00:38:11.976 { 00:38:11.976 "method": "bdev_nvme_set_hotplug", 00:38:11.976 "params": { 00:38:11.976 "enable": false, 00:38:11.976 "period_us": 100000 00:38:11.976 } 00:38:11.976 }, 00:38:11.976 { 00:38:11.976 "method": "bdev_malloc_create", 00:38:11.976 "params": { 00:38:11.976 "block_size": 4096, 00:38:11.976 "name": "malloc0", 00:38:11.976 "num_blocks": 8192, 00:38:11.976 "optimal_io_boundary": 0, 00:38:11.976 "physical_block_size": 4096, 00:38:11.976 "uuid": "0d92c40d-6e9d-4a35-ae2d-0f971e5f90d3" 00:38:11.976 } 00:38:11.976 }, 00:38:11.976 { 00:38:11.976 "method": "bdev_wait_for_examine" 00:38:11.976 } 00:38:11.976 ] 00:38:11.976 }, 00:38:11.976 { 00:38:11.976 "subsystem": "nbd", 00:38:11.976 "config": [] 00:38:11.976 }, 00:38:11.976 { 00:38:11.976 "subsystem": "scheduler", 00:38:11.976 "config": [ 00:38:11.976 { 00:38:11.976 "method": "framework_set_scheduler", 00:38:11.976 "params": { 00:38:11.976 "name": "static" 00:38:11.976 } 00:38:11.976 } 00:38:11.976 ] 00:38:11.976 }, 00:38:11.976 { 00:38:11.976 "subsystem": "nvmf", 00:38:11.976 "config": [ 00:38:11.976 { 00:38:11.976 "method": "nvmf_set_config", 00:38:11.976 "params": { 00:38:11.976 "admin_cmd_passthru": { 00:38:11.976 "identify_ctrlr": false 00:38:11.976 }, 00:38:11.976 "discovery_filter": "match_any" 00:38:11.976 } 00:38:11.976 }, 00:38:11.976 { 00:38:11.976 "method": "nvmf_set_max_subsystems", 00:38:11.976 "params": { 00:38:11.976 "max_subsystems": 1024 00:38:11.976 } 00:38:11.976 }, 00:38:11.976 { 00:38:11.976 "method": "nvmf_set_crdt", 00:38:11.976 "params": { 00:38:11.976 "crdt1": 0, 00:38:11.976 "crdt2": 0, 00:38:11.976 "crdt3": 0 00:38:11.976 } 00:38:11.976 }, 00:38:11.976 { 00:38:11.976 "method": "nvmf_create_transport", 00:38:11.976 "params": { 00:38:11.976 "abort_timeout_sec": 1, 00:38:11.976 "buf_cache_size": 4294967295, 00:38:11.976 "c2h_success": false, 00:38:11.976 "dif_insert_or_strip": false, 00:38:11.976 "in_capsule_data_size": 4096, 00:38:11.976 "io_unit_size": 131072, 00:38:11.976 "max_aq_depth": 128, 00:38:11.976 "max_io_qpairs_per_ctrlr": 127, 00:38:11.976 "max_io_size": 131072, 00:38:11.976 "max_queue_depth": 128, 00:38:11.976 "num_shared_buffers": 511, 00:38:11.976 "sock_priority": 0, 00:38:11.976 "trtype": "TCP", 00:38:11.976 "zcopy": false 00:38:11.976 } 00:38:11.976 }, 00:38:11.976 { 00:38:11.976 "method": "nvmf_create_subsystem", 00:38:11.976 "params": { 00:38:11.976 "allow_any_host": false, 00:38:11.976 "ana_reporting": false, 00:38:11.976 "max_cntlid": 65519, 00:38:11.976 "max_namespaces": 10, 00:38:11.976 "min_cntlid": 1, 00:38:11.976 "model_number": "SPDK bdev Controller", 00:38:11.976 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:11.976 "serial_number": "SPDK00000000000001" 00:38:11.976 } 00:38:11.976 }, 00:38:11.976 { 00:38:11.976 "method": "nvmf_subsystem_add_host", 00:38:11.976 "params": { 00:38:11.976 "host": "nqn.2016-06.io.spdk:host1", 00:38:11.976 
"nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:11.976 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:38:11.976 } 00:38:11.976 }, 00:38:11.976 { 00:38:11.976 "method": "nvmf_subsystem_add_ns", 00:38:11.976 "params": { 00:38:11.976 "namespace": { 00:38:11.976 "bdev_name": "malloc0", 00:38:11.976 "nguid": "0D92C40D6E9D4A35AE2D0F971E5F90D3", 00:38:11.976 "nsid": 1, 00:38:11.976 "uuid": "0d92c40d-6e9d-4a35-ae2d-0f971e5f90d3" 00:38:11.976 }, 00:38:11.976 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:38:11.976 } 00:38:11.976 }, 00:38:11.976 { 00:38:11.976 "method": "nvmf_subsystem_add_listener", 00:38:11.976 "params": { 00:38:11.976 "listen_address": { 00:38:11.976 "adrfam": "IPv4", 00:38:11.976 "traddr": "10.0.0.2", 00:38:11.976 "trsvcid": "4420", 00:38:11.976 "trtype": "TCP" 00:38:11.976 }, 00:38:11.976 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:11.976 "secure_channel": true 00:38:11.976 } 00:38:11.976 } 00:38:11.976 ] 00:38:11.976 } 00:38:11.976 ] 00:38:11.976 }' 00:38:11.976 08:34:45 -- common/autotest_common.sh@10 -- # set +x 00:38:12.236 08:34:45 -- nvmf/common.sh@469 -- # nvmfpid=77260 00:38:12.236 08:34:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:38:12.236 08:34:45 -- nvmf/common.sh@470 -- # waitforlisten 77260 00:38:12.236 08:34:45 -- common/autotest_common.sh@819 -- # '[' -z 77260 ']' 00:38:12.236 08:34:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:12.236 08:34:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:38:12.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:12.236 08:34:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:12.236 08:34:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:38:12.236 08:34:45 -- common/autotest_common.sh@10 -- # set +x 00:38:12.236 [2024-04-17 08:34:45.363413] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:38:12.237 [2024-04-17 08:34:45.363493] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:12.237 [2024-04-17 08:34:45.504286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:12.496 [2024-04-17 08:34:45.609709] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:38:12.496 [2024-04-17 08:34:45.609842] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:12.496 [2024-04-17 08:34:45.609850] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:12.496 [2024-04-17 08:34:45.609856] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:12.496 [2024-04-17 08:34:45.609883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:12.496 [2024-04-17 08:34:45.811575] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:12.756 [2024-04-17 08:34:45.843494] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:12.756 [2024-04-17 08:34:45.843689] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:13.016 08:34:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:38:13.016 08:34:46 -- common/autotest_common.sh@852 -- # return 0 00:38:13.016 08:34:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:38:13.016 08:34:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:38:13.016 08:34:46 -- common/autotest_common.sh@10 -- # set +x 00:38:13.016 08:34:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:13.016 08:34:46 -- target/tls.sh@216 -- # bdevperf_pid=77304 00:38:13.016 08:34:46 -- target/tls.sh@217 -- # waitforlisten 77304 /var/tmp/bdevperf.sock 00:38:13.016 08:34:46 -- common/autotest_common.sh@819 -- # '[' -z 77304 ']' 00:38:13.016 08:34:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:13.016 08:34:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:38:13.016 08:34:46 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:38:13.016 08:34:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:13.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:38:13.016 08:34:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:38:13.016 08:34:46 -- common/autotest_common.sh@10 -- # set +x 00:38:13.016 08:34:46 -- target/tls.sh@213 -- # echo '{ 00:38:13.016 "subsystems": [ 00:38:13.016 { 00:38:13.016 "subsystem": "iobuf", 00:38:13.016 "config": [ 00:38:13.016 { 00:38:13.016 "method": "iobuf_set_options", 00:38:13.016 "params": { 00:38:13.016 "large_bufsize": 135168, 00:38:13.016 "large_pool_count": 1024, 00:38:13.016 "small_bufsize": 8192, 00:38:13.016 "small_pool_count": 8192 00:38:13.016 } 00:38:13.016 } 00:38:13.016 ] 00:38:13.016 }, 00:38:13.016 { 00:38:13.016 "subsystem": "sock", 00:38:13.016 "config": [ 00:38:13.016 { 00:38:13.016 "method": "sock_impl_set_options", 00:38:13.016 "params": { 00:38:13.016 "enable_ktls": false, 00:38:13.016 "enable_placement_id": 0, 00:38:13.016 "enable_quickack": false, 00:38:13.016 "enable_recv_pipe": true, 00:38:13.016 "enable_zerocopy_send_client": false, 00:38:13.016 "enable_zerocopy_send_server": true, 00:38:13.016 "impl_name": "posix", 00:38:13.016 "recv_buf_size": 2097152, 00:38:13.016 "send_buf_size": 2097152, 00:38:13.016 "tls_version": 0, 00:38:13.016 "zerocopy_threshold": 0 00:38:13.016 } 00:38:13.016 }, 00:38:13.016 { 00:38:13.017 "method": "sock_impl_set_options", 00:38:13.017 "params": { 00:38:13.017 "enable_ktls": false, 00:38:13.017 "enable_placement_id": 0, 00:38:13.017 "enable_quickack": false, 00:38:13.017 "enable_recv_pipe": true, 00:38:13.017 "enable_zerocopy_send_client": false, 00:38:13.017 "enable_zerocopy_send_server": true, 00:38:13.017 "impl_name": "ssl", 00:38:13.017 "recv_buf_size": 4096, 00:38:13.017 "send_buf_size": 4096, 00:38:13.017 "tls_version": 0, 00:38:13.017 "zerocopy_threshold": 0 00:38:13.017 } 00:38:13.017 } 00:38:13.017 ] 00:38:13.017 }, 00:38:13.017 { 00:38:13.017 "subsystem": "vmd", 00:38:13.017 "config": [] 00:38:13.017 }, 00:38:13.017 { 00:38:13.017 "subsystem": "accel", 00:38:13.017 "config": [ 00:38:13.017 { 00:38:13.017 "method": "accel_set_options", 00:38:13.017 "params": { 00:38:13.017 "buf_count": 2048, 00:38:13.017 "large_cache_size": 16, 00:38:13.017 "sequence_count": 2048, 00:38:13.017 "small_cache_size": 128, 00:38:13.017 "task_count": 2048 00:38:13.017 } 00:38:13.017 } 00:38:13.017 ] 00:38:13.017 }, 00:38:13.017 { 00:38:13.017 "subsystem": "bdev", 00:38:13.017 "config": [ 00:38:13.017 { 00:38:13.017 "method": "bdev_set_options", 00:38:13.017 "params": { 00:38:13.017 "bdev_auto_examine": true, 00:38:13.017 "bdev_io_cache_size": 256, 00:38:13.017 "bdev_io_pool_size": 65535, 00:38:13.017 "iobuf_large_cache_size": 16, 00:38:13.017 "iobuf_small_cache_size": 128 00:38:13.017 } 00:38:13.017 }, 00:38:13.017 { 00:38:13.017 "method": "bdev_raid_set_options", 00:38:13.017 "params": { 00:38:13.017 "process_window_size_kb": 1024 00:38:13.017 } 00:38:13.017 }, 00:38:13.017 { 00:38:13.017 "method": "bdev_iscsi_set_options", 00:38:13.017 "params": { 00:38:13.017 "timeout_sec": 30 00:38:13.017 } 00:38:13.017 }, 00:38:13.017 { 00:38:13.017 "method": "bdev_nvme_set_options", 00:38:13.017 "params": { 00:38:13.017 "action_on_timeout": "none", 00:38:13.017 "allow_accel_sequence": false, 00:38:13.017 "arbitration_burst": 0, 00:38:13.017 "bdev_retry_count": 3, 00:38:13.017 "ctrlr_loss_timeout_sec": 0, 00:38:13.017 "delay_cmd_submit": true, 00:38:13.017 "fast_io_fail_timeout_sec": 0, 00:38:13.017 "generate_uuids": false, 00:38:13.017 "high_priority_weight": 0, 00:38:13.017 "io_path_stat": false, 00:38:13.017 "io_queue_requests": 512, 00:38:13.017 
"keep_alive_timeout_ms": 10000, 00:38:13.017 "low_priority_weight": 0, 00:38:13.017 "medium_priority_weight": 0, 00:38:13.017 "nvme_adminq_poll_period_us": 10000, 00:38:13.017 "nvme_ioq_poll_period_us": 0, 00:38:13.017 "reconnect_delay_sec": 0, 00:38:13.017 "timeout_admin_us": 0, 00:38:13.017 "timeout_us": 0, 00:38:13.017 "transport_ack_timeout": 0, 00:38:13.017 "transport_retry_count": 4, 00:38:13.017 "transport_tos": 0 00:38:13.017 } 00:38:13.017 }, 00:38:13.017 { 00:38:13.017 "method": "bdev_nvme_attach_controller", 00:38:13.017 "params": { 00:38:13.017 "adrfam": "IPv4", 00:38:13.017 "ctrlr_loss_timeout_sec": 0, 00:38:13.017 "ddgst": false, 00:38:13.017 "fast_io_fail_timeout_sec": 0, 00:38:13.017 "hdgst": false, 00:38:13.017 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:13.017 "name": "TLSTEST", 00:38:13.017 "prchk_guard": false, 00:38:13.017 "prchk_reftag": false, 00:38:13.017 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:38:13.017 "reconnect_delay_sec": 0, 00:38:13.017 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:13.017 "traddr": "10.0.0.2", 00:38:13.017 "trsvcid": "4420", 00:38:13.017 "trtype": "TCP" 00:38:13.017 } 00:38:13.017 }, 00:38:13.017 { 00:38:13.017 "method": "bdev_nvme_set_hotplug", 00:38:13.017 "params": { 00:38:13.017 "enable": false, 00:38:13.017 "period_us": 100000 00:38:13.017 } 00:38:13.017 }, 00:38:13.017 { 00:38:13.017 "method": "bdev_wait_for_examine" 00:38:13.017 } 00:38:13.017 ] 00:38:13.017 }, 00:38:13.017 { 00:38:13.017 "subsystem": "nbd", 00:38:13.017 "config": [] 00:38:13.017 } 00:38:13.017 ] 00:38:13.017 }' 00:38:13.277 [2024-04-17 08:34:46.351850] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:38:13.277 [2024-04-17 08:34:46.351924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77304 ] 00:38:13.277 [2024-04-17 08:34:46.490893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:13.277 [2024-04-17 08:34:46.599883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:13.538 [2024-04-17 08:34:46.750727] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:14.113 08:34:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:38:14.113 08:34:47 -- common/autotest_common.sh@852 -- # return 0 00:38:14.113 08:34:47 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:38:14.113 Running I/O for 10 seconds... 
00:38:24.108
00:38:24.108 Latency(us)
00:38:24.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:24.108 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:38:24.108 Verification LBA range: start 0x0 length 0x2000
00:38:24.108 TLSTESTn1 : 10.01 7651.80 29.89 0.00 0.00 16704.79 3691.77 20376.26
00:38:24.108 ===================================================================================================================
00:38:24.108 Total : 7651.80 29.89 0.00 0.00 16704.79 3691.77 20376.26
00:38:24.108 0
00:38:24.108 08:34:57 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:38:24.108 08:34:57 -- target/tls.sh@223 -- # killprocess 77304
00:38:24.108 08:34:57 -- common/autotest_common.sh@926 -- # '[' -z 77304 ']'
00:38:24.108 08:34:57 -- common/autotest_common.sh@930 -- # kill -0 77304
00:38:24.108 08:34:57 -- common/autotest_common.sh@931 -- # uname
00:38:24.108 08:34:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:38:24.108 08:34:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77304
00:38:24.108 08:34:57 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:38:24.108 08:34:57 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:38:24.108 killing process with pid 77304
00:38:24.108 08:34:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77304'
00:38:24.108 08:34:57 -- common/autotest_common.sh@945 -- # kill 77304
00:38:24.108 Received shutdown signal, test time was about 10.000000 seconds
00:38:24.108
00:38:24.108 Latency(us)
00:38:24.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:24.108 ===================================================================================================================
00:38:24.108 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:24.108 08:34:57 -- common/autotest_common.sh@950 -- # wait 77304
00:38:24.367 08:34:57 -- target/tls.sh@224 -- # killprocess 77260
00:38:24.367 08:34:57 -- common/autotest_common.sh@926 -- # '[' -z 77260 ']'
00:38:24.367 08:34:57 -- common/autotest_common.sh@930 -- # kill -0 77260
00:38:24.367 08:34:57 -- common/autotest_common.sh@931 -- # uname
00:38:24.367 08:34:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:38:24.367 08:34:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77260
00:38:24.367 08:34:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:38:24.367 08:34:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:38:24.367 killing process with pid 77260
00:38:24.367 08:34:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77260'
00:38:24.367 08:34:57 -- common/autotest_common.sh@945 -- # kill 77260
00:38:24.367 08:34:57 -- common/autotest_common.sh@950 -- # wait 77260
00:38:24.627 08:34:57 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT
00:38:24.627 08:34:57 -- target/tls.sh@227 -- # cleanup
00:38:24.627 08:34:57 -- target/tls.sh@15 -- # process_shm --id 0
00:38:24.627 08:34:57 -- common/autotest_common.sh@796 -- # type=--id
00:38:24.627 08:34:57 -- common/autotest_common.sh@797 -- # id=0
00:38:24.627 08:34:57 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']'
00:38:24.627 08:34:57 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:38:24.627 08:34:57 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0
00:38:24.627 08:34:57 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]]
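A quick cross-check of the verification table above: the MiB/s column is simply IOPS multiplied by the 4096-byte I/O size.

    # 7651.80 IOPS x 4096 B per I/O, converted to MiB/s:
    awk 'BEGIN { printf "%.2f MiB/s\n", 7651.80 * 4096 / 1048576 }'
    # prints 29.89 MiB/s, matching the reported column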
00:38:24.627 08:34:57 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:38:24.627 08:34:57 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:24.627 nvmf_trace.0 00:38:24.627 08:34:57 -- common/autotest_common.sh@811 -- # return 0 00:38:24.627 08:34:57 -- target/tls.sh@16 -- # killprocess 77304 00:38:24.627 08:34:57 -- common/autotest_common.sh@926 -- # '[' -z 77304 ']' 00:38:24.627 08:34:57 -- common/autotest_common.sh@930 -- # kill -0 77304 00:38:24.627 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (77304) - No such process 00:38:24.627 Process with pid 77304 is not found 00:38:24.627 08:34:57 -- common/autotest_common.sh@953 -- # echo 'Process with pid 77304 is not found' 00:38:24.627 08:34:57 -- target/tls.sh@17 -- # nvmftestfini 00:38:24.627 08:34:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:38:24.627 08:34:57 -- nvmf/common.sh@116 -- # sync 00:38:24.886 08:34:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:38:24.886 08:34:57 -- nvmf/common.sh@119 -- # set +e 00:38:24.886 08:34:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:38:24.886 08:34:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:38:24.886 rmmod nvme_tcp 00:38:24.886 rmmod nvme_fabrics 00:38:24.886 rmmod nvme_keyring 00:38:24.886 08:34:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:38:24.886 08:34:58 -- nvmf/common.sh@123 -- # set -e 00:38:24.886 08:34:58 -- nvmf/common.sh@124 -- # return 0 00:38:24.886 08:34:58 -- nvmf/common.sh@477 -- # '[' -n 77260 ']' 00:38:24.886 08:34:58 -- nvmf/common.sh@478 -- # killprocess 77260 00:38:24.886 08:34:58 -- common/autotest_common.sh@926 -- # '[' -z 77260 ']' 00:38:24.886 08:34:58 -- common/autotest_common.sh@930 -- # kill -0 77260 00:38:24.886 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (77260) - No such process 00:38:24.886 Process with pid 77260 is not found 00:38:24.886 08:34:58 -- common/autotest_common.sh@953 -- # echo 'Process with pid 77260 is not found' 00:38:24.886 08:34:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:38:24.886 08:34:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:38:24.886 08:34:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:38:24.887 08:34:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:24.887 08:34:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:38:24.887 08:34:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:24.887 08:34:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:24.887 08:34:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:24.887 08:34:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:38:24.887 08:34:58 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:38:24.887 00:38:24.887 real 1m9.616s 00:38:24.887 user 1m46.930s 00:38:24.887 sys 0m23.591s 00:38:24.887 08:34:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:24.887 08:34:58 -- common/autotest_common.sh@10 -- # set +x 00:38:24.887 ************************************ 00:38:24.887 END TEST nvmf_tls 00:38:24.887 ************************************ 00:38:24.887 08:34:58 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:38:24.887 08:34:58 -- 
common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:38:24.887 08:34:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:38:24.887 08:34:58 -- common/autotest_common.sh@10 -- # set +x 00:38:24.887 ************************************ 00:38:24.887 START TEST nvmf_fips 00:38:24.887 ************************************ 00:38:24.887 08:34:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:38:24.887 * Looking for test storage... 00:38:25.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:38:25.147 08:34:58 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:25.147 08:34:58 -- nvmf/common.sh@7 -- # uname -s 00:38:25.147 08:34:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:25.147 08:34:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:25.147 08:34:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:25.147 08:34:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:25.147 08:34:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:25.147 08:34:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:25.147 08:34:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:25.147 08:34:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:25.147 08:34:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:25.147 08:34:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:25.147 08:34:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:38:25.147 08:34:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:38:25.147 08:34:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:25.147 08:34:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:25.147 08:34:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:25.147 08:34:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:25.147 08:34:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:25.147 08:34:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:25.147 08:34:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:25.147 08:34:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.147 08:34:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.147 08:34:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.147 08:34:58 -- paths/export.sh@5 -- # export PATH 00:38:25.147 08:34:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.147 08:34:58 -- nvmf/common.sh@46 -- # : 0 00:38:25.147 08:34:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:38:25.147 08:34:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:38:25.147 08:34:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:38:25.147 08:34:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:25.147 08:34:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:25.147 08:34:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:38:25.147 08:34:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:38:25.147 08:34:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:38:25.147 08:34:58 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:25.147 08:34:58 -- fips/fips.sh@89 -- # check_openssl_version 00:38:25.147 08:34:58 -- fips/fips.sh@83 -- # local target=3.0.0 00:38:25.147 08:34:58 -- fips/fips.sh@85 -- # openssl version 00:38:25.147 08:34:58 -- fips/fips.sh@85 -- # awk '{print $2}' 00:38:25.147 08:34:58 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:38:25.147 08:34:58 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:38:25.147 08:34:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:38:25.147 08:34:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:38:25.147 08:34:58 -- scripts/common.sh@335 -- # IFS=.-: 00:38:25.147 08:34:58 -- scripts/common.sh@335 -- # read -ra ver1 00:38:25.147 08:34:58 -- scripts/common.sh@336 -- # IFS=.-: 00:38:25.147 08:34:58 -- scripts/common.sh@336 -- # read -ra ver2 00:38:25.147 08:34:58 -- scripts/common.sh@337 -- # local 'op=>=' 00:38:25.147 08:34:58 -- scripts/common.sh@339 -- # ver1_l=3 00:38:25.147 08:34:58 -- scripts/common.sh@340 -- # ver2_l=3 00:38:25.147 08:34:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:38:25.147 08:34:58 -- scripts/common.sh@343 -- # case "$op" in 00:38:25.147 08:34:58 -- scripts/common.sh@347 -- # : 1 00:38:25.148 08:34:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:38:25.148 08:34:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:25.148 08:34:58 -- scripts/common.sh@364 -- # decimal 3 00:38:25.148 08:34:58 -- scripts/common.sh@352 -- # local d=3 00:38:25.148 08:34:58 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:38:25.148 08:34:58 -- scripts/common.sh@354 -- # echo 3 00:38:25.148 08:34:58 -- scripts/common.sh@364 -- # ver1[v]=3 00:38:25.148 08:34:58 -- scripts/common.sh@365 -- # decimal 3 00:38:25.148 08:34:58 -- scripts/common.sh@352 -- # local d=3 00:38:25.148 08:34:58 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:38:25.148 08:34:58 -- scripts/common.sh@354 -- # echo 3 00:38:25.148 08:34:58 -- scripts/common.sh@365 -- # ver2[v]=3 00:38:25.148 08:34:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:38:25.148 08:34:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:38:25.148 08:34:58 -- scripts/common.sh@363 -- # (( v++ )) 00:38:25.148 08:34:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:25.148 08:34:58 -- scripts/common.sh@364 -- # decimal 0 00:38:25.148 08:34:58 -- scripts/common.sh@352 -- # local d=0 00:38:25.148 08:34:58 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:38:25.148 08:34:58 -- scripts/common.sh@354 -- # echo 0 00:38:25.148 08:34:58 -- scripts/common.sh@364 -- # ver1[v]=0 00:38:25.148 08:34:58 -- scripts/common.sh@365 -- # decimal 0 00:38:25.148 08:34:58 -- scripts/common.sh@352 -- # local d=0 00:38:25.148 08:34:58 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:38:25.148 08:34:58 -- scripts/common.sh@354 -- # echo 0 00:38:25.148 08:34:58 -- scripts/common.sh@365 -- # ver2[v]=0 00:38:25.148 08:34:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:38:25.148 08:34:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:38:25.148 08:34:58 -- scripts/common.sh@363 -- # (( v++ )) 00:38:25.148 08:34:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:25.148 08:34:58 -- scripts/common.sh@364 -- # decimal 9 00:38:25.148 08:34:58 -- scripts/common.sh@352 -- # local d=9 00:38:25.148 08:34:58 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:38:25.148 08:34:58 -- scripts/common.sh@354 -- # echo 9 00:38:25.148 08:34:58 -- scripts/common.sh@364 -- # ver1[v]=9 00:38:25.148 08:34:58 -- scripts/common.sh@365 -- # decimal 0 00:38:25.148 08:34:58 -- scripts/common.sh@352 -- # local d=0 00:38:25.148 08:34:58 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:38:25.148 08:34:58 -- scripts/common.sh@354 -- # echo 0 00:38:25.148 08:34:58 -- scripts/common.sh@365 -- # ver2[v]=0 00:38:25.148 08:34:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:38:25.148 08:34:58 -- scripts/common.sh@366 -- # return 0 00:38:25.148 08:34:58 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:38:25.148 08:34:58 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:38:25.148 08:34:58 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:38:25.148 08:34:58 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:38:25.148 08:34:58 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:38:25.148 08:34:58 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:38:25.148 08:34:58 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:38:25.148 08:34:58 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:38:25.148 08:34:58 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:38:25.148 08:34:58 -- fips/fips.sh@114 -- # build_openssl_config 00:38:25.148 08:34:58 -- fips/fips.sh@37 -- # cat 00:38:25.148 08:34:58 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:38:25.148 08:34:58 -- fips/fips.sh@58 -- # cat - 00:38:25.148 08:34:58 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:38:25.148 08:34:58 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:38:25.148 08:34:58 -- fips/fips.sh@117 -- # mapfile -t providers 00:38:25.148 08:34:58 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:38:25.148 08:34:58 -- fips/fips.sh@117 -- # openssl list -providers 00:38:25.148 08:34:58 -- fips/fips.sh@117 -- # grep name 00:38:25.148 08:34:58 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:38:25.148 08:34:58 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:38:25.148 08:34:58 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:38:25.148 08:34:58 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:38:25.148 08:34:58 -- fips/fips.sh@128 -- # : 00:38:25.148 08:34:58 -- common/autotest_common.sh@640 -- # local es=0 00:38:25.148 08:34:58 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:38:25.148 08:34:58 -- common/autotest_common.sh@628 -- # local arg=openssl 00:38:25.148 08:34:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:38:25.148 08:34:58 -- common/autotest_common.sh@632 -- # type -t openssl 00:38:25.148 08:34:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:38:25.148 08:34:58 -- common/autotest_common.sh@634 -- # type -P openssl 00:38:25.148 08:34:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:38:25.148 08:34:58 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:38:25.148 08:34:58 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:38:25.148 08:34:58 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:38:25.148 Error setting digest 00:38:25.148 0072D493C57F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:38:25.148 0072D493C57F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:38:25.148 08:34:58 -- common/autotest_common.sh@643 -- # es=1 00:38:25.148 08:34:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:38:25.148 08:34:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:38:25.148 08:34:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
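The failed openssl md5 call above is deliberate: fips.sh treats the inability to fetch MD5, a digest that is not FIPS-approved, as proof that the FIPS provider is actually enforcing. The same probe in isolation, as a minimal standalone sketch:

    # FIPS sanity probe: MD5 must be rejected when FIPS mode is active.
    if echo -n probe | openssl md5 >/dev/null 2>&1; then
        echo 'MD5 succeeded, so the FIPS provider is NOT enforcing' >&2
        exit 1
    fi
    echo 'MD5 rejected as expected: FIPS mode confirmed'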
00:38:25.148 08:34:58 -- fips/fips.sh@131 -- # nvmftestinit 00:38:25.148 08:34:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:38:25.148 08:34:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:25.148 08:34:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:38:25.148 08:34:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:38:25.148 08:34:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:38:25.148 08:34:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.148 08:34:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:25.148 08:34:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:25.148 08:34:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:38:25.148 08:34:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:38:25.148 08:34:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:38:25.148 08:34:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:38:25.148 08:34:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:38:25.148 08:34:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:38:25.148 08:34:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:25.148 08:34:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:25.148 08:34:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:38:25.148 08:34:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:38:25.148 08:34:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:25.148 08:34:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:25.148 08:34:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:25.148 08:34:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:25.148 08:34:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:25.148 08:34:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:25.148 08:34:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:25.148 08:34:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:25.148 08:34:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:38:25.408 08:34:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:38:25.408 Cannot find device "nvmf_tgt_br" 00:38:25.408 08:34:58 -- nvmf/common.sh@154 -- # true 00:38:25.408 08:34:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:38:25.408 Cannot find device "nvmf_tgt_br2" 00:38:25.408 08:34:58 -- nvmf/common.sh@155 -- # true 00:38:25.408 08:34:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:38:25.408 08:34:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:38:25.408 Cannot find device "nvmf_tgt_br" 00:38:25.408 08:34:58 -- nvmf/common.sh@157 -- # true 00:38:25.408 08:34:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:38:25.408 Cannot find device "nvmf_tgt_br2" 00:38:25.408 08:34:58 -- nvmf/common.sh@158 -- # true 00:38:25.408 08:34:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:38:25.408 08:34:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:38:25.408 08:34:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:25.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:25.408 08:34:58 -- nvmf/common.sh@161 -- # true 00:38:25.408 08:34:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:25.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:38:25.408 08:34:58 -- nvmf/common.sh@162 -- # true 00:38:25.408 08:34:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:38:25.408 08:34:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:25.408 08:34:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:25.408 08:34:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:25.408 08:34:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:25.408 08:34:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:25.408 08:34:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:25.408 08:34:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:38:25.408 08:34:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:38:25.408 08:34:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:38:25.408 08:34:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:38:25.408 08:34:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:38:25.408 08:34:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:38:25.408 08:34:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:25.408 08:34:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:25.408 08:34:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:25.408 08:34:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:38:25.408 08:34:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:38:25.408 08:34:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:38:25.668 08:34:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:25.668 08:34:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:25.668 08:34:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:25.668 08:34:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:25.668 08:34:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:38:25.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:25.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:38:25.668 00:38:25.668 --- 10.0.0.2 ping statistics --- 00:38:25.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:25.668 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:38:25.668 08:34:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:38:25.668 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:25.668 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:38:25.668 00:38:25.668 --- 10.0.0.3 ping statistics --- 00:38:25.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:25.668 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:38:25.668 08:34:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:25.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:25.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:38:25.668 00:38:25.668 --- 10.0.0.1 ping statistics --- 00:38:25.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:25.668 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:38:25.668 08:34:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:25.668 08:34:58 -- nvmf/common.sh@421 -- # return 0 00:38:25.668 08:34:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:38:25.668 08:34:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:25.668 08:34:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:38:25.668 08:34:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:38:25.669 08:34:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:25.669 08:34:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:38:25.669 08:34:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:38:25.669 08:34:58 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:38:25.669 08:34:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:38:25.669 08:34:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:38:25.669 08:34:58 -- common/autotest_common.sh@10 -- # set +x 00:38:25.669 08:34:58 -- nvmf/common.sh@469 -- # nvmfpid=77667 00:38:25.669 08:34:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:38:25.669 08:34:58 -- nvmf/common.sh@470 -- # waitforlisten 77667 00:38:25.669 08:34:58 -- common/autotest_common.sh@819 -- # '[' -z 77667 ']' 00:38:25.669 08:34:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:25.669 08:34:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:38:25.669 08:34:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:25.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:25.669 08:34:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:38:25.669 08:34:58 -- common/autotest_common.sh@10 -- # set +x 00:38:25.669 [2024-04-17 08:34:58.880612] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:38:25.669 [2024-04-17 08:34:58.880680] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:25.928 [2024-04-17 08:34:59.017081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:25.928 [2024-04-17 08:34:59.102073] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:38:25.928 [2024-04-17 08:34:59.102190] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:25.928 [2024-04-17 08:34:59.102197] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:25.928 [2024-04-17 08:34:59.102202] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
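The interface plumbing earlier in this trace is nvmf_veth_init at work: one veth pair per interface, the target ends moved into the nvmf_tgt_ns_spdk namespace, every host-side peer enslaved to the nvmf_br bridge, and reachability proven by the three pings. Condensed into a sketch (only one of the two target interfaces shown; each command appears verbatim in the trace above):

    # Host side: initiator at 10.0.0.1; namespace side: target at 10.0.0.2.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # host reaches the namespaced target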
00:38:25.928 [2024-04-17 08:34:59.102225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:26.496 08:34:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:38:26.496 08:34:59 -- common/autotest_common.sh@852 -- # return 0 00:38:26.496 08:34:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:38:26.496 08:34:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:38:26.496 08:34:59 -- common/autotest_common.sh@10 -- # set +x 00:38:26.496 08:34:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:26.496 08:34:59 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:38:26.496 08:34:59 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:38:26.496 08:34:59 -- fips/fips.sh@138 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:38:26.496 08:34:59 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:38:26.496 08:34:59 -- fips/fips.sh@140 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:38:26.496 08:34:59 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:38:26.496 08:34:59 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:38:26.496 08:34:59 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:26.756 [2024-04-17 08:34:59.947790] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:26.756 [2024-04-17 08:34:59.963694] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:26.756 [2024-04-17 08:34:59.963859] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:26.756 malloc0 00:38:26.756 08:35:00 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:26.756 08:35:00 -- fips/fips.sh@148 -- # bdevperf_pid=77719 00:38:26.756 08:35:00 -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:38:26.756 08:35:00 -- fips/fips.sh@149 -- # waitforlisten 77719 /var/tmp/bdevperf.sock 00:38:26.756 08:35:00 -- common/autotest_common.sh@819 -- # '[' -z 77719 ']' 00:38:26.756 08:35:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:26.756 08:35:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:38:26.756 08:35:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:26.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:26.756 08:35:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:38:26.756 08:35:00 -- common/autotest_common.sh@10 -- # set +x 00:38:27.016 [2024-04-17 08:35:00.100390] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:38:27.016 [2024-04-17 08:35:00.100480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77719 ]
00:38:27.016 [2024-04-17 08:35:00.238032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:27.016 [2024-04-17 08:35:00.330678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:38:27.955 08:35:00 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:38:27.955 08:35:00 -- common/autotest_common.sh@852 -- # return 0
00:38:27.955 08:35:00 -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
00:38:27.955 [2024-04-17 08:35:01.150715] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:38:27.955 TLSTESTn1
00:38:27.955 08:35:01 -- fips/fips.sh@155 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:38:28.215 Running I/O for 10 seconds...
00:38:38.200
00:38:38.200 Latency(us)
00:38:38.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:38.200 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:38:38.200 Verification LBA range: start 0x0 length 0x2000
00:38:38.200 TLSTESTn1 : 10.01 7817.95 30.54 0.00 0.00 16349.09 3520.06 20376.26
00:38:38.200 ===================================================================================================================
00:38:38.200 Total : 7817.95 30.54 0.00 0.00 16349.09 3520.06 20376.26
00:38:38.200 0
00:38:38.200 08:35:11 -- fips/fips.sh@1 -- # cleanup
00:38:38.200 08:35:11 -- fips/fips.sh@15 -- # process_shm --id 0
00:38:38.200 08:35:11 -- common/autotest_common.sh@796 -- # type=--id
00:38:38.200 08:35:11 -- common/autotest_common.sh@797 -- # id=0
00:38:38.200 08:35:11 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']'
00:38:38.200 08:35:11 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:38:38.200 08:35:11 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0
00:38:38.200 08:35:11 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]]
00:38:38.200 08:35:11 -- common/autotest_common.sh@808 -- # for n in $shm_files
00:38:38.200 08:35:11 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:38:38.200 nvmf_trace.0
00:38:38.200 08:35:11 -- common/autotest_common.sh@811 -- # return 0
00:38:38.200 08:35:11 -- fips/fips.sh@16 -- # killprocess 77719
00:38:38.200 08:35:11 -- common/autotest_common.sh@926 -- # '[' -z 77719 ']'
00:38:38.200 08:35:11 -- common/autotest_common.sh@930 -- # kill -0 77719
00:38:38.200 08:35:11 -- common/autotest_common.sh@931 -- # uname
00:38:38.200 08:35:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:38:38.200 08:35:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77719
00:38:38.200 08:35:11 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:38:38.200 08:35:11 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:38:38.200 killing process with pid 77719
00:38:38.200 08:35:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77719'
00:38:38.200 08:35:11 -- common/autotest_common.sh@945 -- # kill 77719
00:38:38.200 Received shutdown signal, test time was about 10.000000 seconds
00:38:38.200
00:38:38.200 Latency(us)
00:38:38.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:38.200 ===================================================================================================================
00:38:38.200 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:38.200 08:35:11 -- common/autotest_common.sh@950 -- # wait 77719
00:38:38.459 08:35:11 -- fips/fips.sh@17 -- # nvmftestfini
00:38:38.459 08:35:11 -- nvmf/common.sh@476 -- # nvmfcleanup
00:38:38.459 08:35:11 -- nvmf/common.sh@116 -- # sync
00:38:38.459 08:35:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:38:38.459 08:35:11 -- nvmf/common.sh@119 -- # set +e
00:38:38.459 08:35:11 -- nvmf/common.sh@120 -- # for i in {1..20}
00:38:38.459 08:35:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:38:38.459 rmmod nvme_tcp
00:38:38.459 rmmod nvme_fabrics
00:38:38.459 rmmod nvme_keyring
00:38:38.459 08:35:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:38:38.459 08:35:11 -- nvmf/common.sh@123 -- # set -e
00:38:38.459 08:35:11 -- nvmf/common.sh@124 -- # return 0
00:38:38.716 08:35:11 -- nvmf/common.sh@477 -- # '[' -n 77667 ']'
00:38:38.716 08:35:11 -- nvmf/common.sh@478 -- # killprocess 77667
00:38:38.717 08:35:11 -- common/autotest_common.sh@926 -- # '[' -z 77667 ']'
00:38:38.717 08:35:11 -- common/autotest_common.sh@930 -- # kill -0 77667
00:38:38.717 08:35:11 -- common/autotest_common.sh@931 -- # uname
00:38:38.717 08:35:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:38:38.717 08:35:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77667
00:38:38.717 08:35:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:38:38.717 08:35:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:38:38.717 killing process with pid 77667
00:38:38.717 08:35:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77667'
00:38:38.717 08:35:11 -- common/autotest_common.sh@945 -- # kill 77667
00:38:38.717 08:35:11 -- common/autotest_common.sh@950 -- # wait 77667
00:38:38.975 08:35:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:38:38.975 08:35:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:38:38.975 08:35:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:38:38.975 08:35:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:38:38.975 08:35:12 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:38:38.975 08:35:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:38.975 08:35:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:38:38.975 08:35:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:38.975 08:35:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:38:38.975 08:35:12 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
00:38:38.975
00:38:38.975 real 0m13.983s
00:38:38.975 user 0m19.013s
00:38:38.975 sys 0m5.473s
00:38:38.975 08:35:12 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:38:38.975 08:35:12 -- common/autotest_common.sh@10 -- # set +x
00:38:38.975 ************************************
00:38:38.975 END TEST nvmf_fips
00:38:38.975 ************************************
00:38:38.975 08:35:12 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']'
00:38:38.975 08:35:12 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz
/home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:38:38.975 08:35:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:38:38.975 08:35:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:38:38.975 08:35:12 -- common/autotest_common.sh@10 -- # set +x 00:38:38.975 ************************************ 00:38:38.975 START TEST nvmf_fuzz 00:38:38.975 ************************************ 00:38:38.975 08:35:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:38:38.975 * Looking for test storage... 00:38:38.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:38:38.975 08:35:12 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:38.975 08:35:12 -- nvmf/common.sh@7 -- # uname -s 00:38:38.975 08:35:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:38.975 08:35:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:38.975 08:35:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:38.975 08:35:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:38.975 08:35:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:38.975 08:35:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:38.975 08:35:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:38.975 08:35:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:38.975 08:35:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:38.975 08:35:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:38.975 08:35:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:38:38.975 08:35:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:38:38.975 08:35:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:38.975 08:35:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:38.975 08:35:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:38.975 08:35:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:38.975 08:35:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:38.975 08:35:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:38.975 08:35:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:38.975 08:35:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.976 08:35:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.976 
08:35:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.976 08:35:12 -- paths/export.sh@5 -- # export PATH 00:38:38.976 08:35:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.976 08:35:12 -- nvmf/common.sh@46 -- # : 0 00:38:38.976 08:35:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:38:38.976 08:35:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:38:38.976 08:35:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:38:38.976 08:35:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:38.976 08:35:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:38.976 08:35:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:38:38.976 08:35:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:38:38.976 08:35:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:38:38.976 08:35:12 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:38:38.976 08:35:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:38:38.976 08:35:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:38.976 08:35:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:38:38.976 08:35:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:38:38.976 08:35:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:38:38.976 08:35:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:38.976 08:35:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:38.976 08:35:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:39.234 08:35:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:38:39.234 08:35:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:38:39.234 08:35:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:38:39.234 08:35:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:38:39.234 08:35:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:38:39.234 08:35:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:38:39.234 08:35:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:39.234 08:35:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:39.234 08:35:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:38:39.234 08:35:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:38:39.234 08:35:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:39.234 08:35:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:39.235 08:35:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:39.235 08:35:12 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:39.235 08:35:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:39.235 08:35:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:39.235 08:35:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:39.235 08:35:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:39.235 08:35:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:38:39.235 08:35:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:38:39.235 Cannot find device "nvmf_tgt_br" 00:38:39.235 08:35:12 -- nvmf/common.sh@154 -- # true 00:38:39.235 08:35:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:38:39.235 Cannot find device "nvmf_tgt_br2" 00:38:39.235 08:35:12 -- nvmf/common.sh@155 -- # true 00:38:39.235 08:35:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:38:39.235 08:35:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:38:39.235 Cannot find device "nvmf_tgt_br" 00:38:39.235 08:35:12 -- nvmf/common.sh@157 -- # true 00:38:39.235 08:35:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:38:39.235 Cannot find device "nvmf_tgt_br2" 00:38:39.235 08:35:12 -- nvmf/common.sh@158 -- # true 00:38:39.235 08:35:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:38:39.235 08:35:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:38:39.235 08:35:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:39.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:39.235 08:35:12 -- nvmf/common.sh@161 -- # true 00:38:39.235 08:35:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:39.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:39.235 08:35:12 -- nvmf/common.sh@162 -- # true 00:38:39.235 08:35:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:38:39.235 08:35:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:39.235 08:35:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:39.235 08:35:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:39.235 08:35:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:39.235 08:35:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:39.235 08:35:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:39.235 08:35:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:38:39.235 08:35:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:38:39.235 08:35:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:38:39.235 08:35:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:38:39.493 08:35:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:38:39.493 08:35:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:38:39.493 08:35:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:39.493 08:35:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:39.493 08:35:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:39.493 08:35:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:38:39.493 08:35:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:38:39.493 08:35:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:38:39.493 08:35:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:39.493 08:35:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:39.493 08:35:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:39.493 08:35:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:39.493 08:35:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:38:39.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:39.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:38:39.493 00:38:39.493 --- 10.0.0.2 ping statistics --- 00:38:39.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:39.493 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:38:39.493 08:35:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:38:39.493 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:39.493 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:38:39.493 00:38:39.493 --- 10.0.0.3 ping statistics --- 00:38:39.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:39.493 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:38:39.493 08:35:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:39.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:39.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:38:39.493 00:38:39.493 --- 10.0.0.1 ping statistics --- 00:38:39.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:39.493 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:38:39.493 08:35:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:39.493 08:35:12 -- nvmf/common.sh@421 -- # return 0 00:38:39.493 08:35:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:38:39.493 08:35:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:39.493 08:35:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:38:39.493 08:35:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:38:39.493 08:35:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:39.493 08:35:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:38:39.493 08:35:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:38:39.493 08:35:12 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=78061 00:38:39.493 08:35:12 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:38:39.493 08:35:12 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:38:39.493 08:35:12 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 78061 00:38:39.493 08:35:12 -- common/autotest_common.sh@819 -- # '[' -z 78061 ']' 00:38:39.493 08:35:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:39.493 08:35:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:38:39.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:39.493 08:35:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
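With nvmf_tgt (pid 78061) now up inside the namespace, the fuzz script next provisions a target over RPC and aims nvme_fuzz at it. Condensed, the sequence executed below is as follows (rpc_cmd in the trace is a thin wrapper around rpc.py; the commands themselves are copied from the output that follows):

    # Create a TCP transport, a 64 MiB malloc bdev with 512-byte blocks,
    # and a subsystem exposing it on the in-namespace listener.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create -b Malloc0 64 512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Then fuzz the fabrics/admin path for 30 seconds with a fixed seed:
    /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 \
        -r /var/tmp/nvme_fuzz -t 30 -S 123456 \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a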
00:38:39.493 08:35:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:38:39.493 08:35:12 -- common/autotest_common.sh@10 -- # set +x 00:38:40.430 08:35:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:38:40.430 08:35:13 -- common/autotest_common.sh@852 -- # return 0 00:38:40.430 08:35:13 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:40.430 08:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:40.430 08:35:13 -- common/autotest_common.sh@10 -- # set +x 00:38:40.430 08:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:40.430 08:35:13 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:38:40.430 08:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:40.430 08:35:13 -- common/autotest_common.sh@10 -- # set +x 00:38:40.430 Malloc0 00:38:40.430 08:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:40.430 08:35:13 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:40.430 08:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:40.430 08:35:13 -- common/autotest_common.sh@10 -- # set +x 00:38:40.430 08:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:40.430 08:35:13 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:40.430 08:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:40.430 08:35:13 -- common/autotest_common.sh@10 -- # set +x 00:38:40.430 08:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:40.430 08:35:13 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:40.430 08:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:40.430 08:35:13 -- common/autotest_common.sh@10 -- # set +x 00:38:40.430 08:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:40.430 08:35:13 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:38:40.430 08:35:13 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:38:40.999 Shutting down the fuzz application 00:38:40.999 08:35:14 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:38:41.258 Shutting down the fuzz application 00:38:41.258 08:35:14 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:41.258 08:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:41.258 08:35:14 -- common/autotest_common.sh@10 -- # set +x 00:38:41.258 08:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:41.258 08:35:14 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:38:41.258 08:35:14 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:38:41.258 08:35:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:38:41.258 08:35:14 -- nvmf/common.sh@116 -- # sync 00:38:41.258 08:35:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:38:41.258 08:35:14 -- nvmf/common.sh@119 -- # set +e 00:38:41.258 08:35:14 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:38:41.258 08:35:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:38:41.258 rmmod nvme_tcp 00:38:41.258 rmmod nvme_fabrics 00:38:41.258 rmmod nvme_keyring 00:38:41.258 08:35:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:38:41.258 08:35:14 -- nvmf/common.sh@123 -- # set -e 00:38:41.258 08:35:14 -- nvmf/common.sh@124 -- # return 0 00:38:41.258 08:35:14 -- nvmf/common.sh@477 -- # '[' -n 78061 ']' 00:38:41.258 08:35:14 -- nvmf/common.sh@478 -- # killprocess 78061 00:38:41.258 08:35:14 -- common/autotest_common.sh@926 -- # '[' -z 78061 ']' 00:38:41.258 08:35:14 -- common/autotest_common.sh@930 -- # kill -0 78061 00:38:41.258 08:35:14 -- common/autotest_common.sh@931 -- # uname 00:38:41.258 08:35:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:38:41.258 08:35:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78061 00:38:41.258 08:35:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:38:41.258 08:35:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:38:41.258 08:35:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78061' 00:38:41.258 killing process with pid 78061 00:38:41.258 08:35:14 -- common/autotest_common.sh@945 -- # kill 78061 00:38:41.258 08:35:14 -- common/autotest_common.sh@950 -- # wait 78061 00:38:41.517 08:35:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:38:41.517 08:35:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:38:41.517 08:35:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:38:41.517 08:35:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:41.517 08:35:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:38:41.517 08:35:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:41.517 08:35:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:41.517 08:35:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.777 08:35:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:38:41.777 08:35:14 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:38:41.777 00:38:41.777 real 0m2.707s 00:38:41.777 user 0m2.772s 00:38:41.777 sys 0m0.682s 00:38:41.777 08:35:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:41.777 08:35:14 -- common/autotest_common.sh@10 -- # set +x 00:38:41.777 ************************************ 00:38:41.777 END TEST nvmf_fuzz 00:38:41.777 ************************************ 00:38:41.777 08:35:14 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:38:41.777 08:35:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:38:41.777 08:35:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:38:41.777 08:35:14 -- common/autotest_common.sh@10 -- # set +x 00:38:41.777 ************************************ 00:38:41.777 START TEST nvmf_multiconnection 00:38:41.777 ************************************ 00:38:41.777 08:35:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:38:41.777 * Looking for test storage... 
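The teardown above (killprocess 78061) follows a small reusable pattern: check that the PID is still alive with kill -0, confirm via ps that it is the expected process (SPDK reactors report themselves as reactor_0), then signal it and reap it with wait. A simplified sketch of that logic; the real helper lives in autotest_common.sh and also special-cases sudo-owned processes:

    # simplified killprocess: refuse to signal a PID we do not recognize
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                 # gone already?
        local name
        name=$(ps --no-headers -o comm= "$pid")    # SPDK apps show reactor_0 here
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null || true            # reap if it is our child
    }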
00:38:41.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:38:41.777 08:35:15 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:41.777 08:35:15 -- nvmf/common.sh@7 -- # uname -s 00:38:41.777 08:35:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:41.777 08:35:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:41.777 08:35:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:41.777 08:35:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:41.777 08:35:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:41.777 08:35:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:41.777 08:35:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:41.777 08:35:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:41.777 08:35:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:41.777 08:35:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:41.777 08:35:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:38:41.777 08:35:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:38:41.777 08:35:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:41.777 08:35:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:41.777 08:35:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:41.777 08:35:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:41.777 08:35:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:41.777 08:35:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:41.777 08:35:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:41.777 08:35:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.777 08:35:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.777 08:35:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.777 08:35:15 -- 
paths/export.sh@5 -- # export PATH 00:38:41.777 08:35:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.777 08:35:15 -- nvmf/common.sh@46 -- # : 0 00:38:41.777 08:35:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:38:41.777 08:35:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:38:41.777 08:35:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:38:41.777 08:35:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:41.777 08:35:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:41.777 08:35:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:38:41.777 08:35:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:38:41.777 08:35:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:38:41.777 08:35:15 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:41.777 08:35:15 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:41.777 08:35:15 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:38:41.777 08:35:15 -- target/multiconnection.sh@16 -- # nvmftestinit 00:38:41.777 08:35:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:38:41.777 08:35:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:41.777 08:35:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:38:41.777 08:35:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:38:41.777 08:35:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:38:41.777 08:35:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:41.777 08:35:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:41.777 08:35:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.777 08:35:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:38:41.777 08:35:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:38:41.777 08:35:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:38:41.777 08:35:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:38:41.777 08:35:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:38:41.777 08:35:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:38:41.777 08:35:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:41.777 08:35:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:41.777 08:35:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:38:41.777 08:35:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:38:41.777 08:35:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:41.777 08:35:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:41.777 08:35:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:41.777 08:35:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:41.777 08:35:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:41.778 08:35:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:41.778 08:35:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:41.778 08:35:15 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:41.778 08:35:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:38:42.035 08:35:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:38:42.035 Cannot find device "nvmf_tgt_br" 00:38:42.035 08:35:15 -- nvmf/common.sh@154 -- # true 00:38:42.035 08:35:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:38:42.035 Cannot find device "nvmf_tgt_br2" 00:38:42.035 08:35:15 -- nvmf/common.sh@155 -- # true 00:38:42.035 08:35:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:38:42.035 08:35:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:38:42.035 Cannot find device "nvmf_tgt_br" 00:38:42.035 08:35:15 -- nvmf/common.sh@157 -- # true 00:38:42.035 08:35:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:38:42.035 Cannot find device "nvmf_tgt_br2" 00:38:42.035 08:35:15 -- nvmf/common.sh@158 -- # true 00:38:42.035 08:35:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:38:42.035 08:35:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:38:42.035 08:35:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:42.035 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:42.035 08:35:15 -- nvmf/common.sh@161 -- # true 00:38:42.035 08:35:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:42.035 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:42.035 08:35:15 -- nvmf/common.sh@162 -- # true 00:38:42.035 08:35:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:38:42.035 08:35:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:42.035 08:35:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:42.035 08:35:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:42.035 08:35:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:42.035 08:35:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:42.035 08:35:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:42.035 08:35:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:38:42.035 08:35:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:38:42.035 08:35:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:38:42.035 08:35:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:38:42.035 08:35:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:38:42.035 08:35:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:38:42.035 08:35:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:42.035 08:35:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:42.294 08:35:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:42.294 08:35:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:38:42.294 08:35:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:38:42.294 08:35:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:38:42.294 08:35:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:42.294 08:35:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:42.294 
08:35:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:42.294 08:35:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:42.294 08:35:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:38:42.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:42.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:38:42.294 00:38:42.294 --- 10.0.0.2 ping statistics --- 00:38:42.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:42.294 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:38:42.294 08:35:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:38:42.294 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:42.294 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:38:42.294 00:38:42.294 --- 10.0.0.3 ping statistics --- 00:38:42.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:42.294 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:38:42.294 08:35:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:42.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:42.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:38:42.294 00:38:42.294 --- 10.0.0.1 ping statistics --- 00:38:42.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:42.294 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:38:42.294 08:35:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:42.294 08:35:15 -- nvmf/common.sh@421 -- # return 0 00:38:42.294 08:35:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:38:42.294 08:35:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:42.294 08:35:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:38:42.294 08:35:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:38:42.294 08:35:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:42.294 08:35:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:38:42.294 08:35:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:38:42.294 08:35:15 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:38:42.294 08:35:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:38:42.294 08:35:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:38:42.294 08:35:15 -- common/autotest_common.sh@10 -- # set +x 00:38:42.294 08:35:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:38:42.294 08:35:15 -- nvmf/common.sh@469 -- # nvmfpid=78268 00:38:42.294 08:35:15 -- nvmf/common.sh@470 -- # waitforlisten 78268 00:38:42.294 08:35:15 -- common/autotest_common.sh@819 -- # '[' -z 78268 ']' 00:38:42.294 08:35:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:42.294 08:35:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:38:42.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:42.294 08:35:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:42.294 08:35:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:38:42.294 08:35:15 -- common/autotest_common.sh@10 -- # set +x 00:38:42.294 [2024-04-17 08:35:15.521289] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
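The target restart above is the same two-step sequence as in the fuzz test: launch nvmf_tgt inside the namespace, then block in waitforlisten until the RPC socket answers. A hedged sketch of that startup, assuming an SPDK checkout at $SPDK_DIR and that scripts/rpc.py accepts a -t timeout as in this tree:

    #!/usr/bin/env bash
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask, -m 0xF: cores 0-3
    ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the default RPC socket (/var/tmp/spdk.sock) until the app is up
    until "$SPDK_DIR/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"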
00:38:42.294 [2024-04-17 08:35:15.521366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:42.553 [2024-04-17 08:35:15.661864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:42.553 [2024-04-17 08:35:15.760962] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:38:42.553 [2024-04-17 08:35:15.761091] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:42.553 [2024-04-17 08:35:15.761099] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:42.553 [2024-04-17 08:35:15.761104] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:42.553 [2024-04-17 08:35:15.761274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:42.553 [2024-04-17 08:35:15.762509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:42.553 [2024-04-17 08:35:15.762623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:42.553 [2024-04-17 08:35:15.762627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:38:43.121 08:35:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:38:43.121 08:35:16 -- common/autotest_common.sh@852 -- # return 0 00:38:43.121 08:35:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:38:43.121 08:35:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:38:43.121 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.121 08:35:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:43.121 08:35:16 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:43.121 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.121 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.121 [2024-04-17 08:35:16.420366] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:43.121 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.121 08:35:16 -- target/multiconnection.sh@21 -- # seq 1 11 00:38:43.121 08:35:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:38:43.121 08:35:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:38:43.121 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.121 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.380 Malloc1 00:38:43.380 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.380 08:35:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:38:43.380 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.380 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.380 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.380 08:35:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:38:43.380 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.380 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.380 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.380 08:35:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:43.380 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.380 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.380 [2024-04-17 08:35:16.504908] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:43.380 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.380 08:35:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:38:43.380 08:35:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:38:43.380 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.380 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.380 Malloc2 00:38:43.380 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.380 08:35:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:38:43.380 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.380 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.380 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.380 08:35:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:38:43.380 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.380 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.380 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.380 08:35:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:43.380 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.380 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.380 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.380 08:35:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:38:43.380 08:35:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:38:43.380 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.380 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.380 Malloc3 00:38:43.380 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.380 08:35:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:38:43.380 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.380 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.380 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.380 08:35:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:38:43.380 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.380 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.380 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.380 08:35:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:38:43.380 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.380 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.380 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.380 08:35:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:38:43.380 08:35:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:38:43.380 
08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.380 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.380 Malloc4 00:38:43.380 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.380 08:35:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:38:43.380 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.380 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.380 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.380 08:35:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:38:43.380 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.380 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.380 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.380 08:35:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:38:43.380 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.380 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.380 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.380 08:35:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:38:43.380 08:35:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:38:43.380 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.380 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.380 Malloc5 00:38:43.380 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.380 08:35:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:38:43.380 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.380 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.640 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.640 08:35:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:38:43.640 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.640 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.640 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.640 08:35:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:38:43.640 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.640 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.640 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.640 08:35:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:38:43.640 08:35:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:38:43.640 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.640 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.640 Malloc6 00:38:43.640 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.640 08:35:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:38:43.640 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.640 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.640 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.640 08:35:16 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:38:43.640 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.640 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.640 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.640 08:35:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:38:43.640 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.640 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.640 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.640 08:35:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:38:43.640 08:35:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:38:43.640 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.640 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.640 Malloc7 00:38:43.640 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.640 08:35:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:38:43.640 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.640 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.640 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.640 08:35:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:38:43.640 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.640 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.640 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.640 08:35:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:38:43.640 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.640 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.640 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.640 08:35:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:38:43.640 08:35:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:38:43.640 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.640 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.640 Malloc8 00:38:43.640 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.640 08:35:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:38:43.640 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.640 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.640 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.640 08:35:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:38:43.640 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.640 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.640 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.640 08:35:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:38:43.640 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.640 08:35:16 -- common/autotest_common.sh@10 -- # set +x 
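Everything from Malloc1 onward is the same four RPCs repeated per subsystem, so the block above (and the Malloc9 through Malloc11 portion that follows) is driven by one loop in multiconnection.sh. An equivalent sketch using scripts/rpc.py directly rather than the suite's rpc_cmd wrapper:

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NVMF_SUBSYS=11
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        $rpc bdev_malloc_create 64 512 -b "Malloc$i"       # 64 MiB bdev, 512 B blocks
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done

All eleven subsystems listen on the same 10.0.0.2:4420 pair; the initiator tells them apart purely by NQN and serial number (SPDK1 through SPDK11).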
00:38:43.640 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.640 08:35:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:38:43.640 08:35:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:38:43.640 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.640 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.640 Malloc9 00:38:43.640 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.640 08:35:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:38:43.640 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.640 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.640 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.640 08:35:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:38:43.640 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.640 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.900 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.900 08:35:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:38:43.900 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.900 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.900 08:35:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.900 08:35:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:38:43.900 08:35:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:38:43.900 08:35:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.900 08:35:16 -- common/autotest_common.sh@10 -- # set +x 00:38:43.900 Malloc10 00:38:43.900 08:35:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.900 08:35:17 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:38:43.900 08:35:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.900 08:35:17 -- common/autotest_common.sh@10 -- # set +x 00:38:43.900 08:35:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.901 08:35:17 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:38:43.901 08:35:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.901 08:35:17 -- common/autotest_common.sh@10 -- # set +x 00:38:43.901 08:35:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.901 08:35:17 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:38:43.901 08:35:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.901 08:35:17 -- common/autotest_common.sh@10 -- # set +x 00:38:43.901 08:35:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.901 08:35:17 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:38:43.901 08:35:17 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:38:43.901 08:35:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.901 08:35:17 -- common/autotest_common.sh@10 -- # set +x 00:38:43.901 Malloc11 00:38:43.901 08:35:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.901 08:35:17 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:38:43.901 08:35:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.901 08:35:17 -- common/autotest_common.sh@10 -- # set +x 00:38:43.901 08:35:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.901 08:35:17 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:38:43.901 08:35:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.901 08:35:17 -- common/autotest_common.sh@10 -- # set +x 00:38:43.901 08:35:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.901 08:35:17 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:38:43.901 08:35:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:43.901 08:35:17 -- common/autotest_common.sh@10 -- # set +x 00:38:43.901 08:35:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:43.901 08:35:17 -- target/multiconnection.sh@28 -- # seq 1 11 00:38:43.901 08:35:17 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:38:43.901 08:35:17 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:44.160 08:35:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:38:44.160 08:35:17 -- common/autotest_common.sh@1177 -- # local i=0 00:38:44.160 08:35:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:38:44.160 08:35:17 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:38:44.160 08:35:17 -- common/autotest_common.sh@1184 -- # sleep 2 00:38:46.063 08:35:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:38:46.063 08:35:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:38:46.063 08:35:19 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:38:46.063 08:35:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:38:46.063 08:35:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:38:46.063 08:35:19 -- common/autotest_common.sh@1187 -- # return 0 00:38:46.063 08:35:19 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:38:46.063 08:35:19 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:38:46.321 08:35:19 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:38:46.321 08:35:19 -- common/autotest_common.sh@1177 -- # local i=0 00:38:46.321 08:35:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:38:46.321 08:35:19 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:38:46.321 08:35:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:38:48.225 08:35:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:38:48.225 08:35:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:38:48.225 08:35:21 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:38:48.225 08:35:21 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:38:48.225 08:35:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:38:48.225 08:35:21 -- common/autotest_common.sh@1187 -- # return 0 00:38:48.225 08:35:21 -- target/multiconnection.sh@28 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:38:48.225 08:35:21 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:38:48.484 08:35:21 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:38:48.484 08:35:21 -- common/autotest_common.sh@1177 -- # local i=0 00:38:48.484 08:35:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:38:48.484 08:35:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:38:48.484 08:35:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:38:50.391 08:35:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:38:50.391 08:35:23 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:38:50.391 08:35:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:38:50.391 08:35:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:38:50.391 08:35:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:38:50.391 08:35:23 -- common/autotest_common.sh@1187 -- # return 0 00:38:50.391 08:35:23 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:38:50.391 08:35:23 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:38:50.650 08:35:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:38:50.650 08:35:23 -- common/autotest_common.sh@1177 -- # local i=0 00:38:50.650 08:35:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:38:50.650 08:35:23 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:38:50.650 08:35:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:38:52.554 08:35:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:38:52.554 08:35:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:38:52.554 08:35:25 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:38:52.554 08:35:25 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:38:52.554 08:35:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:38:52.814 08:35:25 -- common/autotest_common.sh@1187 -- # return 0 00:38:52.814 08:35:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:38:52.814 08:35:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:38:52.814 08:35:26 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:38:52.814 08:35:26 -- common/autotest_common.sh@1177 -- # local i=0 00:38:52.814 08:35:26 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:38:52.814 08:35:26 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:38:52.814 08:35:26 -- common/autotest_common.sh@1184 -- # sleep 2 00:38:55.355 08:35:28 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:38:55.355 08:35:28 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:38:55.355 08:35:28 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:38:55.355 08:35:28 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:38:55.355 08:35:28 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:38:55.355 08:35:28 
-- common/autotest_common.sh@1187 -- # return 0 00:38:55.355 08:35:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:38:55.355 08:35:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:38:55.355 08:35:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:38:55.355 08:35:28 -- common/autotest_common.sh@1177 -- # local i=0 00:38:55.355 08:35:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:38:55.355 08:35:28 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:38:55.355 08:35:28 -- common/autotest_common.sh@1184 -- # sleep 2 00:38:57.266 08:35:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:38:57.266 08:35:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:38:57.266 08:35:30 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:38:57.266 08:35:30 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:38:57.266 08:35:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:38:57.266 08:35:30 -- common/autotest_common.sh@1187 -- # return 0 00:38:57.266 08:35:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:38:57.267 08:35:30 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:38:57.267 08:35:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:38:57.267 08:35:30 -- common/autotest_common.sh@1177 -- # local i=0 00:38:57.267 08:35:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:38:57.267 08:35:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:38:57.267 08:35:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:38:59.177 08:35:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:38:59.177 08:35:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:38:59.177 08:35:32 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:38:59.177 08:35:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:38:59.177 08:35:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:38:59.177 08:35:32 -- common/autotest_common.sh@1187 -- # return 0 00:38:59.177 08:35:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:38:59.177 08:35:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:38:59.436 08:35:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:38:59.436 08:35:32 -- common/autotest_common.sh@1177 -- # local i=0 00:38:59.436 08:35:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:38:59.436 08:35:32 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:38:59.436 08:35:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:39:01.379 08:35:34 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:39:01.379 08:35:34 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:39:01.379 08:35:34 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:39:01.379 08:35:34 -- common/autotest_common.sh@1186 -- # nvme_devices=1 
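Each nvme connect above is immediately followed by waitforserial SPDK<i>, which polls lsblk until a block device with the expected serial appears (up to 15 tries, two seconds apart, as the sleep 2 and i++ <= 15 traces show). The pair condenses to something like the following, with the hostnqn and hostid taken from the values generated earlier in this run:

    #!/usr/bin/env bash
    connect_and_wait() {
        local i=$1
        nvme connect -t tcp -a 10.0.0.2 -s 4420 \
            -n "nqn.2016-06.io.spdk:cnode$i" \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 \
            --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2
        # wait until the kernel exposes a namespace whose serial is SPDK$i
        for _ in $(seq 1 15); do
            [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ] && return 0
            sleep 2
        done
        echo "device with serial SPDK$i never appeared" >&2
        return 1
    }
    connect_and_wait 9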
00:39:01.379 08:35:34 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:39:01.379 08:35:34 -- common/autotest_common.sh@1187 -- # return 0 00:39:01.379 08:35:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:39:01.379 08:35:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:39:01.639 08:35:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:39:01.639 08:35:34 -- common/autotest_common.sh@1177 -- # local i=0 00:39:01.639 08:35:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:39:01.639 08:35:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:39:01.639 08:35:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:39:04.176 08:35:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:39:04.176 08:35:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:39:04.176 08:35:36 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:39:04.176 08:35:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:39:04.176 08:35:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:39:04.176 08:35:36 -- common/autotest_common.sh@1187 -- # return 0 00:39:04.176 08:35:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:39:04.176 08:35:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:39:04.176 08:35:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:39:04.176 08:35:37 -- common/autotest_common.sh@1177 -- # local i=0 00:39:04.176 08:35:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:39:04.176 08:35:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:39:04.176 08:35:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:39:06.104 08:35:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:39:06.104 08:35:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:39:06.104 08:35:39 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:39:06.104 08:35:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:39:06.104 08:35:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:39:06.104 08:35:39 -- common/autotest_common.sh@1187 -- # return 0 00:39:06.104 08:35:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:39:06.104 08:35:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:39:06.104 08:35:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:39:06.104 08:35:39 -- common/autotest_common.sh@1177 -- # local i=0 00:39:06.104 08:35:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:39:06.104 08:35:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:39:06.104 08:35:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:39:08.021 08:35:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:39:08.021 08:35:41 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:39:08.021 08:35:41 -- 
common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:39:08.021 08:35:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:39:08.021 08:35:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:39:08.021 08:35:41 -- common/autotest_common.sh@1187 -- # return 0 00:39:08.021 08:35:41 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:39:08.021 [global] 00:39:08.021 thread=1 00:39:08.021 invalidate=1 00:39:08.021 rw=read 00:39:08.021 time_based=1 00:39:08.021 runtime=10 00:39:08.021 ioengine=libaio 00:39:08.021 direct=1 00:39:08.021 bs=262144 00:39:08.021 iodepth=64 00:39:08.021 norandommap=1 00:39:08.021 numjobs=1 00:39:08.021 00:39:08.021 [job0] 00:39:08.021 filename=/dev/nvme0n1 00:39:08.021 [job1] 00:39:08.021 filename=/dev/nvme10n1 00:39:08.021 [job2] 00:39:08.021 filename=/dev/nvme1n1 00:39:08.021 [job3] 00:39:08.021 filename=/dev/nvme2n1 00:39:08.021 [job4] 00:39:08.021 filename=/dev/nvme3n1 00:39:08.021 [job5] 00:39:08.021 filename=/dev/nvme4n1 00:39:08.278 [job6] 00:39:08.278 filename=/dev/nvme5n1 00:39:08.278 [job7] 00:39:08.278 filename=/dev/nvme6n1 00:39:08.278 [job8] 00:39:08.278 filename=/dev/nvme7n1 00:39:08.278 [job9] 00:39:08.278 filename=/dev/nvme8n1 00:39:08.278 [job10] 00:39:08.278 filename=/dev/nvme9n1 00:39:08.278 Could not set queue depth (nvme0n1) 00:39:08.278 Could not set queue depth (nvme10n1) 00:39:08.278 Could not set queue depth (nvme1n1) 00:39:08.278 Could not set queue depth (nvme2n1) 00:39:08.278 Could not set queue depth (nvme3n1) 00:39:08.278 Could not set queue depth (nvme4n1) 00:39:08.278 Could not set queue depth (nvme5n1) 00:39:08.278 Could not set queue depth (nvme6n1) 00:39:08.278 Could not set queue depth (nvme7n1) 00:39:08.278 Could not set queue depth (nvme8n1) 00:39:08.278 Could not set queue depth (nvme9n1) 00:39:08.538 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:39:08.538 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:39:08.538 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:39:08.538 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:39:08.538 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:39:08.538 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:39:08.538 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:39:08.538 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:39:08.538 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:39:08.538 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:39:08.538 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:39:08.538 fio-3.35 00:39:08.538 Starting 11 threads 00:39:20.779 00:39:20.779 job0: (groupid=0, jobs=1): err= 0: pid=78750: Wed Apr 17 08:35:51 2024 00:39:20.779 read: IOPS=388, BW=97.2MiB/s (102MB/s)(985MiB/10129msec) 00:39:20.779 slat (usec): min=24, max=114294, avg=2525.39, stdev=9038.64 
00:39:20.779 clat (msec): min=19, max=298, avg=161.67, stdev=48.82
00:39:20.779 lat (msec): min=19, max=303, avg=164.20, stdev=50.11
00:39:20.779 clat percentiles (msec):
00:39:20.779 | 1.00th=[ 34], 5.00th=[ 55], 10.00th=[ 95], 20.00th=[ 121],
00:39:20.779 | 30.00th=[ 144], 40.00th=[ 161], 50.00th=[ 171], 60.00th=[ 182],
00:39:20.779 | 70.00th=[ 194], 80.00th=[ 203], 90.00th=[ 215], 95.00th=[ 222],
00:39:20.779 | 99.00th=[ 247], 99.50th=[ 247], 99.90th=[ 279], 99.95th=[ 279],
00:39:20.779 | 99.99th=[ 300]
00:39:20.779 bw ( KiB/s): min=76341, max=211968, per=6.84%, avg=99152.35, stdev=32488.06, samples=20
00:39:20.779 iops : min= 298, max= 828, avg=387.20, stdev=126.96, samples=20
00:39:20.779 lat (msec) : 20=0.08%, 50=4.32%, 100=7.03%, 250=88.25%, 500=0.33%
00:39:20.779 cpu : usr=0.18%, sys=2.30%, ctx=841, majf=0, minf=4097
00:39:20.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:39:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:20.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:20.779 issued rwts: total=3939,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:20.779 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:20.779 job1: (groupid=0, jobs=1): err= 0: pid=78751: Wed Apr 17 08:35:51 2024
00:39:20.779 read: IOPS=385, BW=96.5MiB/s (101MB/s)(978MiB/10133msec)
00:39:20.779 slat (usec): min=14, max=91003, avg=2515.77, stdev=8385.57
00:39:20.779 clat (msec): min=30, max=311, avg=162.96, stdev=52.35
00:39:20.779 lat (msec): min=30, max=311, avg=165.48, stdev=53.56
00:39:20.779 clat percentiles (msec):
00:39:20.779 | 1.00th=[ 47], 5.00th=[ 73], 10.00th=[ 82], 20.00th=[ 108],
00:39:20.779 | 30.00th=[ 134], 40.00th=[ 163], 50.00th=[ 176], 60.00th=[ 188],
00:39:20.779 | 70.00th=[ 201], 80.00th=[ 209], 90.00th=[ 222], 95.00th=[ 230],
00:39:20.779 | 99.00th=[ 259], 99.50th=[ 268], 99.90th=[ 313], 99.95th=[ 313],
00:39:20.779 | 99.99th=[ 313]
00:39:20.779 bw ( KiB/s): min=72192, max=196096, per=6.78%, avg=98361.70, stdev=32921.95, samples=20
00:39:20.779 iops : min= 282, max= 766, avg=384.05, stdev=128.60, samples=20
00:39:20.779 lat (msec) : 50=1.18%, 100=16.50%, 250=81.18%, 500=1.15%
00:39:20.779 cpu : usr=0.11%, sys=2.26%, ctx=780, majf=0, minf=4097
00:39:20.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:39:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:20.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:20.779 issued rwts: total=3910,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:20.779 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:20.779 job2: (groupid=0, jobs=1): err= 0: pid=78752: Wed Apr 17 08:35:51 2024
00:39:20.779 read: IOPS=689, BW=172MiB/s (181MB/s)(1733MiB/10059msec)
00:39:20.779 slat (usec): min=15, max=58593, avg=1427.32, stdev=4905.37
00:39:20.779 clat (msec): min=26, max=193, avg=91.24, stdev=22.49
00:39:20.779 lat (msec): min=28, max=193, avg=92.67, stdev=23.04
00:39:20.779 clat percentiles (msec):
00:39:20.779 | 1.00th=[ 39], 5.00th=[ 51], 10.00th=[ 63], 20.00th=[ 75],
00:39:20.779 | 30.00th=[ 81], 40.00th=[ 86], 50.00th=[ 91], 60.00th=[ 96],
00:39:20.779 | 70.00th=[ 104], 80.00th=[ 112], 90.00th=[ 121], 95.00th=[ 128],
00:39:20.779 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 174],
00:39:20.779 | 99.99th=[ 194]
00:39:20.779 bw ( KiB/s): min=128512, max=266240, per=12.12%, avg=175731.95, stdev=35560.42, samples=20
00:39:20.779 iops : min= 502, max= 1040, avg=686.30, stdev=139.01, samples=20
00:39:20.779 lat (msec) : 50=4.99%, 100=61.46%, 250=33.54%
00:39:20.779 cpu : usr=0.30%, sys=3.76%, ctx=1666, majf=0, minf=4097
00:39:20.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:39:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:20.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:20.779 issued rwts: total=6931,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:20.779 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:20.779 job3: (groupid=0, jobs=1): err= 0: pid=78753: Wed Apr 17 08:35:51 2024
00:39:20.779 read: IOPS=532, BW=133MiB/s (139MB/s)(1341MiB/10080msec)
00:39:20.779 slat (usec): min=18, max=77501, avg=1839.42, stdev=6479.02
00:39:20.779 clat (msec): min=20, max=266, avg=118.24, stdev=30.11
00:39:20.779 lat (msec): min=20, max=274, avg=120.08, stdev=30.92
00:39:20.779 clat percentiles (msec):
00:39:20.779 | 1.00th=[ 33], 5.00th=[ 77], 10.00th=[ 85], 20.00th=[ 99],
00:39:20.779 | 30.00th=[ 107], 40.00th=[ 111], 50.00th=[ 115], 60.00th=[ 122],
00:39:20.779 | 70.00th=[ 127], 80.00th=[ 136], 90.00th=[ 155], 95.00th=[ 171],
00:39:20.779 | 99.00th=[ 220], 99.50th=[ 232], 99.90th=[ 257], 99.95th=[ 257],
00:39:20.779 | 99.99th=[ 268]
00:39:20.779 bw ( KiB/s): min=84480, max=206848, per=9.34%, avg=135510.25, stdev=27233.15, samples=20
00:39:20.779 iops : min= 330, max= 808, avg=529.10, stdev=106.30, samples=20
00:39:20.779 lat (msec) : 50=1.53%, 100=20.08%, 250=78.20%, 500=0.19%
00:39:20.779 cpu : usr=0.22%, sys=2.94%, ctx=1256, majf=0, minf=4097
00:39:20.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8%
00:39:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:20.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:20.779 issued rwts: total=5363,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:20.779 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:20.779 job4: (groupid=0, jobs=1): err= 0: pid=78754: Wed Apr 17 08:35:51 2024
00:39:20.779 read: IOPS=811, BW=203MiB/s (213MB/s)(2031MiB/10012msec)
00:39:20.779 slat (usec): min=15, max=73546, avg=1154.70, stdev=4584.03
00:39:20.779 clat (msec): min=9, max=203, avg=77.54, stdev=40.53
00:39:20.779 lat (msec): min=9, max=250, avg=78.69, stdev=41.23
00:39:20.779 clat percentiles (msec):
00:39:20.779 | 1.00th=[ 20], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 38],
00:39:20.779 | 30.00th=[ 43], 40.00th=[ 50], 50.00th=[ 74], 60.00th=[ 93],
00:39:20.779 | 70.00th=[ 109], 80.00th=[ 120], 90.00th=[ 132], 95.00th=[ 144],
00:39:20.779 | 99.00th=[ 163], 99.50th=[ 165], 99.90th=[ 178], 99.95th=[ 186],
00:39:20.779 | 99.99th=[ 203]
00:39:20.779 bw ( KiB/s): min=108544, max=414720, per=13.58%, avg=196967.37, stdev=105624.72, samples=19
00:39:20.779 iops : min= 424, max= 1620, avg=769.16, stdev=412.72, samples=19
00:39:20.779 lat (msec) : 10=0.11%, 20=1.19%, 50=39.49%, 100=22.46%, 250=36.75%
00:39:20.779 cpu : usr=0.34%, sys=4.41%, ctx=1857, majf=0, minf=4097
00:39:20.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:39:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:20.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:20.779 issued rwts: total=8122,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:20.779 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:20.779 job5: (groupid=0, jobs=1): err= 0: pid=78755: Wed Apr 17 08:35:51 2024
00:39:20.779 read: IOPS=358, BW=89.6MiB/s (93.9MB/s)(907MiB/10131msec)
00:39:20.779 slat (usec): min=25, max=89187, avg=2668.48, stdev=9388.18
00:39:20.779 clat (msec): min=28, max=311, avg=175.48, stdev=40.48
00:39:20.779 lat (msec): min=29, max=311, avg=178.15, stdev=41.86
00:39:20.779 clat percentiles (msec):
00:39:20.779 | 1.00th=[ 53], 5.00th=[ 102], 10.00th=[ 125], 20.00th=[ 150],
00:39:20.779 | 30.00th=[ 161], 40.00th=[ 171], 50.00th=[ 180], 60.00th=[ 188],
00:39:20.779 | 70.00th=[ 199], 80.00th=[ 207], 90.00th=[ 220], 95.00th=[ 230],
00:39:20.779 | 99.00th=[ 264], 99.50th=[ 284], 99.90th=[ 296], 99.95th=[ 313],
00:39:20.779 | 99.99th=[ 313]
00:39:20.779 bw ( KiB/s): min=66560, max=145408, per=6.29%, avg=91203.45, stdev=18228.27, samples=20
00:39:20.779 iops : min= 260, max= 568, avg=356.15, stdev=71.15, samples=20
00:39:20.780 lat (msec) : 50=0.88%, 100=3.97%, 250=93.55%, 500=1.60%
00:39:20.780 cpu : usr=0.14%, sys=2.16%, ctx=773, majf=0, minf=4097
00:39:20.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3%
00:39:20.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:20.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:20.780 issued rwts: total=3629,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:20.780 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:20.780 job6: (groupid=0, jobs=1): err= 0: pid=78756: Wed Apr 17 08:35:51 2024
00:39:20.780 read: IOPS=675, BW=169MiB/s (177MB/s)(1698MiB/10053msec)
00:39:20.780 slat (usec): min=16, max=48440, avg=1366.12, stdev=4837.04
00:39:20.780 clat (msec): min=5, max=319, avg=93.16, stdev=32.74
00:39:20.780 lat (msec): min=5, max=319, avg=94.53, stdev=33.16
00:39:20.780 clat percentiles (msec):
00:39:20.780 | 1.00th=[ 28], 5.00th=[ 41], 10.00th=[ 56], 20.00th=[ 71],
00:39:20.780 | 30.00th=[ 78], 40.00th=[ 85], 50.00th=[ 91], 60.00th=[ 102],
00:39:20.780 | 70.00th=[ 109], 80.00th=[ 117], 90.00th=[ 127], 95.00th=[ 138],
00:39:20.780 | 99.00th=[ 180], 99.50th=[ 292], 99.90th=[ 305], 99.95th=[ 321],
00:39:20.780 | 99.99th=[ 321]
00:39:20.780 bw ( KiB/s): min=127488, max=276992, per=11.87%, avg=172172.65, stdev=41075.67, samples=20
00:39:20.780 iops : min= 498, max= 1082, avg=672.45, stdev=160.49, samples=20
00:39:20.780 lat (msec) : 10=0.09%, 20=0.35%, 50=7.86%, 100=50.66%, 250=40.48%
00:39:20.780 lat (msec) : 500=0.56%
00:39:20.780 cpu : usr=0.29%, sys=3.62%, ctx=1817, majf=0, minf=4097
00:39:20.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:39:20.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:20.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:20.780 issued rwts: total=6793,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:20.780 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:20.780 job7: (groupid=0, jobs=1): err= 0: pid=78757: Wed Apr 17 08:35:51 2024
00:39:20.780 read: IOPS=362, BW=90.6MiB/s (95.0MB/s)(917MiB/10126msec)
00:39:20.780 slat (usec): min=15, max=119894, avg=2661.54, stdev=9663.66
00:39:20.780 clat (msec): min=35, max=308, avg=173.64, stdev=41.39
00:39:20.780 lat (msec): min=35, max=320, avg=176.30, stdev=42.83
00:39:20.780 clat percentiles (msec):
00:39:20.780 | 1.00th=[ 59], 5.00th=[ 90], 10.00th=[ 118], 20.00th=[ 140],
00:39:20.780 | 30.00th=[ 159], 40.00th=[ 169], 50.00th=[ 180], 60.00th=[ 190],
00:39:20.780 | 70.00th=[ 199], 80.00th=[ 209], 90.00th=[ 218], 95.00th=[ 226],
00:39:20.780 | 99.00th=[ 259], 99.50th=[ 279], 99.90th=[ 288], 99.95th=[ 305],
00:39:20.780 | 99.99th=[ 309]
00:39:20.780 bw ( KiB/s): min=66938, max=164023, per=6.36%, avg=92193.55, stdev=21197.87, samples=20
00:39:20.780 iops : min= 261, max= 640, avg=360.00, stdev=82.72, samples=20
00:39:20.780 lat (msec) : 50=0.57%, 100=5.97%, 250=91.19%, 500=2.26%
00:39:20.780 cpu : usr=0.17%, sys=2.03%, ctx=893, majf=0, minf=4097
00:39:20.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3%
00:39:20.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:20.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:20.780 issued rwts: total=3668,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:20.780 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:20.780 job8: (groupid=0, jobs=1): err= 0: pid=78758: Wed Apr 17 08:35:51 2024
00:39:20.780 read: IOPS=516, BW=129MiB/s (135MB/s)(1302MiB/10080msec)
00:39:20.780 slat (usec): min=15, max=75664, avg=1893.65, stdev=6736.77
00:39:20.780 clat (msec): min=31, max=283, avg=121.66, stdev=32.24
00:39:20.780 lat (msec): min=32, max=287, avg=123.56, stdev=33.03
00:39:20.780 clat percentiles (msec):
00:39:20.780 | 1.00th=[ 55], 5.00th=[ 75], 10.00th=[ 85], 20.00th=[ 100],
00:39:20.780 | 30.00th=[ 107], 40.00th=[ 113], 50.00th=[ 120], 60.00th=[ 125],
00:39:20.780 | 70.00th=[ 132], 80.00th=[ 142], 90.00th=[ 163], 95.00th=[ 178],
00:39:20.780 | 99.00th=[ 226], 99.50th=[ 259], 99.90th=[ 284], 99.95th=[ 284],
00:39:20.780 | 99.99th=[ 284]
00:39:20.780 bw ( KiB/s): min=73728, max=205824, per=9.07%, avg=131562.95, stdev=25807.00, samples=20
00:39:20.780 iops : min= 288, max= 804, avg=513.65, stdev=100.79, samples=20
00:39:20.780 lat (msec) : 50=0.73%, 100=19.67%, 250=78.91%, 500=0.69%
00:39:20.780 cpu : usr=0.24%, sys=2.86%, ctx=1077, majf=0, minf=4097
00:39:20.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:39:20.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:20.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:20.780 issued rwts: total=5207,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:20.780 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:20.780 job9: (groupid=0, jobs=1): err= 0: pid=78759: Wed Apr 17 08:35:51 2024
00:39:20.780 read: IOPS=595, BW=149MiB/s (156MB/s)(1500MiB/10073msec)
00:39:20.780 slat (usec): min=15, max=83973, avg=1610.51, stdev=5895.66
00:39:20.780 clat (msec): min=2, max=235, avg=105.63, stdev=33.14
00:39:20.780 lat (msec): min=2, max=259, avg=107.24, stdev=33.84
00:39:20.780 clat percentiles (msec):
00:39:20.780 | 1.00th=[ 13], 5.00th=[ 64], 10.00th=[ 73], 20.00th=[ 82],
00:39:20.780 | 30.00th=[ 89], 40.00th=[ 95], 50.00th=[ 104], 60.00th=[ 113],
00:39:20.780 | 70.00th=[ 122], 80.00th=[ 130], 90.00th=[ 142], 95.00th=[ 165],
00:39:20.780 | 99.00th=[ 205], 99.50th=[ 218], 99.90th=[ 226], 99.95th=[ 230],
00:39:20.780 | 99.99th=[ 236]
00:39:20.780 bw ( KiB/s): min=82267, max=211968, per=10.48%, avg=151918.50, stdev=36115.37, samples=20
00:39:20.780 iops : min= 321, max= 828, avg=593.20, stdev=141.17, samples=20
00:39:20.780 lat (msec) : 4=0.35%, 10=0.50%, 20=1.07%, 50=1.18%, 100=42.59%
00:39:20.780 lat (msec) : 250=54.31%
00:39:20.780 cpu : usr=0.24%, sys=3.36%, ctx=1281, majf=0, minf=4097
00:39:20.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:39:20.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:20.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:20.780 issued rwts: total=6001,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:20.780 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:20.780 job10: (groupid=0, jobs=1): err= 0: pid=78760: Wed Apr 17 08:35:51 2024
00:39:20.780 read: IOPS=379, BW=94.8MiB/s (99.4MB/s)(961MiB/10129msec)
00:39:20.780 slat (usec): min=14, max=104904, avg=2496.05, stdev=9033.22
00:39:20.780 clat (msec): min=32, max=319, avg=165.77, stdev=48.95
00:39:20.780 lat (msec): min=34, max=319, avg=168.27, stdev=50.33
00:39:20.780 clat percentiles (msec):
00:39:20.780 | 1.00th=[ 55], 5.00th=[ 70], 10.00th=[ 81], 20.00th=[ 125],
00:39:20.780 | 30.00th=[ 148], 40.00th=[ 161], 50.00th=[ 174], 60.00th=[ 188],
00:39:20.780 | 70.00th=[ 201], 80.00th=[ 209], 90.00th=[ 218], 95.00th=[ 228],
00:39:20.780 | 99.00th=[ 255], 99.50th=[ 268], 99.90th=[ 288], 99.95th=[ 321],
00:39:20.780 | 99.99th=[ 321]
00:39:20.780 bw ( KiB/s): min=69632, max=184832, per=6.67%, avg=96703.55, stdev=31089.42, samples=20
00:39:20.780 iops : min= 272, max= 722, avg=377.60, stdev=121.47, samples=20
00:39:20.780 lat (msec) : 50=0.55%, 100=12.65%, 250=85.61%, 500=1.20%
00:39:20.780 cpu : usr=0.20%, sys=2.21%, ctx=885, majf=0, minf=4097
00:39:20.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:39:20.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:20.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:20.780 issued rwts: total=3842,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:20.780 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:20.780
00:39:20.780 Run status group 0 (all jobs):
00:39:20.780 READ: bw=1416MiB/s (1485MB/s), 89.6MiB/s-203MiB/s (93.9MB/s-213MB/s), io=14.0GiB (15.0GB), run=10012-10133msec
00:39:20.780
00:39:20.780 Disk stats (read/write):
00:39:20.780 nvme0n1: ios=7751/0, merge=0/0, ticks=1234437/0, in_queue=1234437, util=97.95%
00:39:20.780 nvme10n1: ios=7784/0, merge=0/0, ticks=1247639/0, in_queue=1247639, util=98.10%
00:39:20.780 nvme1n1: ios=13514/0, merge=0/0, ticks=1211120/0, in_queue=1211120, util=97.41%
00:39:20.780 nvme2n1: ios=10444/0, merge=0/0, ticks=1210506/0, in_queue=1210506, util=97.45%
00:39:20.780 nvme3n1: ios=15672/0, merge=0/0, ticks=1209861/0, in_queue=1209861, util=97.58%
00:39:20.780 nvme4n1: ios=7172/0, merge=0/0, ticks=1239846/0, in_queue=1239846, util=98.22%
00:39:20.780 nvme5n1: ios=13210/0, merge=0/0, ticks=1211698/0, in_queue=1211698, util=97.61%
00:39:20.780 nvme6n1: ios=7265/0, merge=0/0, ticks=1245526/0, in_queue=1245526, util=98.17%
00:39:20.780 nvme7n1: ios=10150/0, merge=0/0, ticks=1216389/0, in_queue=1216389, util=98.39%
00:39:20.780 nvme8n1: ios=11741/0, merge=0/0, ticks=1212018/0, in_queue=1212018, util=98.32%
00:39:20.780 nvme9n1: ios=7605/0, merge=0/0, ticks=1243075/0, in_queue=1243075, util=98.31%
08:35:52 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10
00:39:20.780 [global]
00:39:20.780 thread=1
00:39:20.780 invalidate=1
00:39:20.780 rw=randwrite
00:39:20.780 time_based=1
00:39:20.780 runtime=10
00:39:20.780 ioengine=libaio
00:39:20.780 direct=1
00:39:20.780 bs=262144
00:39:20.780 iodepth=64
00:39:20.780 norandommap=1
00:39:20.780 numjobs=1
00:39:20.780
00:39:20.780 [job0]
00:39:20.780 filename=/dev/nvme0n1
00:39:20.780 [job1]
00:39:20.780 filename=/dev/nvme10n1
00:39:20.780 [job2]
00:39:20.780 filename=/dev/nvme1n1
00:39:20.780 [job3]
00:39:20.780 filename=/dev/nvme2n1
00:39:20.780 [job4]
00:39:20.780 filename=/dev/nvme3n1
00:39:20.780 [job5]
00:39:20.780 filename=/dev/nvme4n1
00:39:20.780 [job6]
00:39:20.780 filename=/dev/nvme5n1
00:39:20.780 [job7]
00:39:20.780 filename=/dev/nvme6n1
00:39:20.780 [job8]
00:39:20.780 filename=/dev/nvme7n1
00:39:20.780 [job9]
00:39:20.780 filename=/dev/nvme8n1
00:39:20.780 [job10]
00:39:20.780 filename=/dev/nvme9n1
00:39:20.781 Could not set queue depth (nvme0n1)
00:39:20.781 Could not set queue depth (nvme10n1)
00:39:20.781 Could not set queue depth (nvme1n1)
00:39:20.781 Could not set queue depth (nvme2n1)
00:39:20.781 Could not set queue depth (nvme3n1)
00:39:20.781 Could not set queue depth (nvme4n1)
00:39:20.781 Could not set queue depth (nvme5n1)
00:39:20.781 Could not set queue depth (nvme6n1)
00:39:20.781 Could not set queue depth (nvme7n1)
00:39:20.781 Could not set queue depth (nvme8n1)
00:39:20.781 Could not set queue depth (nvme9n1)
00:39:20.781 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:39:20.781 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:39:20.781 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:39:20.781 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:39:20.781 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:39:20.781 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:39:20.781 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:39:20.781 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:39:20.781 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:39:20.781 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:39:20.781 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:39:20.781 fio-3.35
00:39:20.781 Starting 11 threads
00:39:30.871
00:39:30.871 job0: (groupid=0, jobs=1): err= 0: pid=78955: Wed Apr 17 08:36:02 2024
00:39:30.871 write: IOPS=444, BW=111MiB/s (117MB/s)(1126MiB/10121msec); 0 zone resets
00:39:30.871 slat (usec): min=20, max=18961, avg=2109.27, stdev=3888.84
00:39:30.871 clat (msec): min=6, max=265, avg=141.61, stdev=35.33
00:39:30.871 lat (msec): min=6, max=265, avg=143.72, stdev=35.76
00:39:30.871 clat percentiles (msec):
00:39:30.871 | 1.00th=[ 29], 5.00th=[ 57], 10.00th=[ 81], 20.00th=[ 138],
00:39:30.871 | 30.00th=[ 144], 40.00th=[ 146], 50.00th=[ 148], 60.00th=[ 150],
00:39:30.871 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 174], 95.00th=[ 192],
00:39:30.871 | 99.00th=[ 215], 99.50th=[ 228], 99.90th=[ 257], 99.95th=[ 257],
00:39:30.871 | 99.99th=[ 266]
00:39:30.871 bw ( KiB/s): min=83968, max=220672, per=8.80%, avg=113631.10, stdev=26369.04, samples=20
00:39:30.871 iops : min= 328, max= 862, avg=443.85, stdev=103.01, samples=20
00:39:30.871 lat (msec) : 10=0.24%, 20=0.27%, 50=1.47%, 100=9.55%, 250=88.34%
00:39:30.871 lat (msec) : 500=0.13%
00:39:30.871 cpu : usr=1.21%, sys=1.64%, ctx=3227, majf=0, minf=1
00:39:30.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6%
00:39:30.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:30.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:30.871 issued rwts: total=0,4503,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:30.871 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:30.871 job1: (groupid=0, jobs=1): err= 0: pid=78956: Wed Apr 17 08:36:02 2024
00:39:30.871 write: IOPS=545, BW=136MiB/s (143MB/s)(1378MiB/10099msec); 0 zone resets
00:39:30.871 slat (usec): min=23, max=29532, avg=1809.98, stdev=3125.67
00:39:30.871 clat (msec): min=26, max=193, avg=115.40, stdev=21.13
00:39:30.871 lat (msec): min=27, max=193, avg=117.21, stdev=21.28
00:39:30.871 clat percentiles (msec):
00:39:30.871 | 1.00th=[ 91], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 100],
00:39:30.871 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 109],
00:39:30.871 | 70.00th=[ 134], 80.00th=[ 140], 90.00th=[ 146], 95.00th=[ 150],
00:39:30.871 | 99.00th=[ 167], 99.50th=[ 178], 99.90th=[ 184], 99.95th=[ 188],
00:39:30.871 | 99.99th=[ 194]
00:39:30.871 bw ( KiB/s): min=110371, max=162816, per=10.80%, avg=139467.25, stdev=20601.81, samples=20
00:39:30.871 iops : min= 431, max= 636, avg=544.75, stdev=80.45, samples=20
00:39:30.871 lat (msec) : 50=0.36%, 100=23.08%, 250=76.56%
00:39:30.871 cpu : usr=2.15%, sys=1.96%, ctx=6939, majf=0, minf=1
00:39:30.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:39:30.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:30.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:30.871 issued rwts: total=0,5512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:30.871 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:30.871 job2: (groupid=0, jobs=1): err= 0: pid=78957: Wed Apr 17 08:36:02 2024
00:39:30.871 write: IOPS=559, BW=140MiB/s (147MB/s)(1412MiB/10096msec); 0 zone resets
00:39:30.871 slat (usec): min=17, max=32720, avg=1732.57, stdev=3077.09
00:39:30.871 clat (msec): min=6, max=189, avg=112.61, stdev=23.47
00:39:30.871 lat (msec): min=6, max=189, avg=114.34, stdev=23.70
00:39:30.871 clat percentiles (msec):
00:39:30.871 | 1.00th=[ 32], 5.00th=[ 93], 10.00th=[ 96], 20.00th=[ 99],
00:39:30.871 | 30.00th=[ 102], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 108],
00:39:30.871 | 70.00th=[ 130], 80.00th=[ 138], 90.00th=[ 144], 95.00th=[ 146],
00:39:30.871 | 99.00th=[ 167], 99.50th=[ 178], 99.90th=[ 184], 99.95th=[ 184],
00:39:30.871 | 99.99th=[ 190]
00:39:30.871 bw ( KiB/s): min=110592, max=176128, per=11.07%, avg=142969.70, stdev=22541.84, samples=20
00:39:30.871 iops : min= 432, max= 688, avg=558.40, stdev=88.00, samples=20
00:39:30.871 lat (msec) : 10=0.04%, 20=0.44%, 50=1.42%, 100=23.63%, 250=74.47%
00:39:30.871 cpu : usr=1.88%, sys=1.62%, ctx=7096, majf=0, minf=1
00:39:30.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:39:30.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:30.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:30.871 issued rwts: total=0,5649,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:30.871 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:30.871 job3: (groupid=0, jobs=1): err= 0: pid=78961: Wed Apr 17 08:36:02 2024
00:39:30.871 write: IOPS=354, BW=88.7MiB/s (93.0MB/s)(901MiB/10149msec); 0 zone resets
00:39:30.872 slat (usec): min=25, max=36068, avg=2750.39, stdev=4988.01
00:39:30.872 clat (msec): min=38, max=331, avg=177.50, stdev=36.71
00:39:30.872 lat (msec): min=38, max=331, avg=180.25, stdev=37.02
00:39:30.872 clat percentiles (msec):
00:39:30.872 | 1.00th=[ 57], 5.00th=[ 101], 10.00th=[ 126], 20.00th=[ 153],
00:39:30.872 | 30.00th=[ 159], 40.00th=[ 182], 50.00th=[ 190], 60.00th=[ 194],
00:39:30.872 | 70.00th=[ 201], 80.00th=[ 205], 90.00th=[ 209], 95.00th=[ 213],
00:39:30.872 | 99.00th=[ 251], 99.50th=[ 275], 99.90th=[ 321], 99.95th=[ 334],
00:39:30.872 | 99.99th=[ 334]
00:39:30.872 bw ( KiB/s): min=73728, max=154933, per=7.02%, avg=90597.40, stdev=18454.29, samples=20
00:39:30.872 iops : min= 288, max= 605, avg=353.85, stdev=72.07, samples=20
00:39:30.872 lat (msec) : 50=0.28%, 100=4.72%, 250=93.95%, 500=1.05%
00:39:30.872 cpu : usr=1.00%, sys=1.32%, ctx=2913, majf=0, minf=1
00:39:30.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3%
00:39:30.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:30.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:30.872 issued rwts: total=0,3602,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:30.872 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:30.872 job4: (groupid=0, jobs=1): err= 0: pid=78964: Wed Apr 17 08:36:02 2024
00:39:30.872 write: IOPS=388, BW=97.2MiB/s (102MB/s)(985MiB/10128msec); 0 zone resets
00:39:30.872 slat (usec): min=29, max=39106, avg=2535.22, stdev=4428.83
00:39:30.872 clat (msec): min=4, max=268, avg=161.94, stdev=29.39
00:39:30.872 lat (msec): min=4, max=268, avg=164.47, stdev=29.60
00:39:30.872 clat percentiles (msec):
00:39:30.872 | 1.00th=[ 110], 5.00th=[ 138], 10.00th=[ 140], 20.00th=[ 144],
00:39:30.872 | 30.00th=[ 146], 40.00th=[ 148], 50.00th=[ 150], 60.00th=[ 155],
00:39:30.872 | 70.00th=[ 159], 80.00th=[ 194], 90.00th=[ 209], 95.00th=[ 211],
00:39:30.872 | 99.00th=[ 251], 99.50th=[ 253], 99.90th=[ 262], 99.95th=[ 271],
00:39:30.872 | 99.99th=[ 271]
00:39:30.872 bw ( KiB/s): min=70656, max=112640, per=7.68%, avg=99225.60, stdev=14362.54, samples=20
00:39:30.872 iops : min= 276, max= 440, avg=387.60, stdev=56.10, samples=20
00:39:30.872 lat (msec) : 10=0.13%, 20=0.13%, 50=0.08%, 100=0.61%, 250=97.92%
00:39:30.872 lat (msec) : 500=1.14%
00:39:30.872 cpu : usr=0.90%, sys=1.55%, ctx=5599, majf=0, minf=1
00:39:30.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:39:30.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:30.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:30.872 issued rwts: total=0,3939,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:30.872 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:30.872 job5: (groupid=0, jobs=1): err= 0: pid=78971: Wed Apr 17 08:36:02 2024
00:39:30.872 write: IOPS=321, BW=80.3MiB/s (84.2MB/s)(816MiB/10156msec); 0 zone resets
00:39:30.872 slat (usec): min=18, max=54099, avg=2934.04, stdev=5317.35
00:39:30.872 clat (msec): min=31, max=331, avg=196.12, stdev=23.83
00:39:30.872 lat (msec): min=31, max=331, avg=199.06, stdev=23.74
00:39:30.872 clat percentiles (msec):
00:39:30.872 | 1.00th=[ 111], 5.00th=[ 171], 10.00th=[ 178], 20.00th=[ 184],
00:39:30.872 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 199],
00:39:30.872 | 70.00th=[ 205], 80.00th=[ 207], 90.00th=[ 213], 95.00th=[ 236],
00:39:30.872 | 99.00th=[ 279], 99.50th=[ 288], 99.90th=[ 321], 99.95th=[ 330],
00:39:30.872 | 99.99th=[ 330]
00:39:30.872 bw ( KiB/s): min=64000, max=88064, per=6.34%, avg=81937.00, stdev=5818.42, samples=20
00:39:30.872 iops : min= 250, max= 344, avg=320.05, stdev=22.72, samples=20
00:39:30.872 lat (msec) : 50=0.28%, 100=0.61%, 250=96.20%, 500=2.91%
00:39:30.872 cpu : usr=0.84%, sys=1.26%, ctx=2917, majf=0, minf=1
00:39:30.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1%
00:39:30.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:30.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:30.872 issued rwts: total=0,3264,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:30.872 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:30.872 job6: (groupid=0, jobs=1): err= 0: pid=78972: Wed Apr 17 08:36:02 2024
00:39:30.872 write: IOPS=828, BW=207MiB/s (217MB/s)(2092MiB/10099msec); 0 zone resets
00:39:30.872 slat (usec): min=27, max=15033, avg=1190.47, stdev=2114.72
00:39:30.872 clat (msec): min=5, max=195, avg=76.02, stdev=26.16
00:39:30.872 lat (msec): min=5, max=196, avg=77.21, stdev=26.53
00:39:30.872 clat percentiles (msec):
00:39:30.872 | 1.00th=[ 47], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 52],
00:39:30.872 | 30.00th=[ 53], 40.00th=[ 54], 50.00th=[ 56], 60.00th=[ 97],
00:39:30.872 | 70.00th=[ 101], 80.00th=[ 104], 90.00th=[ 107], 95.00th=[ 109],
00:39:30.872 | 99.00th=[ 117], 99.50th=[ 131], 99.90th=[ 184], 99.95th=[ 190],
00:39:30.872 | 99.99th=[ 197]
00:39:30.872 bw ( KiB/s): min=151552, max=323584, per=16.46%, avg=212592.00, stdev=70284.59, samples=20
00:39:30.872 iops : min= 592, max= 1264, avg=830.40, stdev=274.58, samples=20
00:39:30.872 lat (msec) : 10=0.05%, 50=11.28%, 100=57.48%, 250=31.19%
00:39:30.872 cpu : usr=2.88%, sys=3.34%, ctx=11250, majf=0, minf=1
00:39:30.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:39:30.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:30.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:30.872 issued rwts: total=0,8368,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:30.872 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:30.872 job7: (groupid=0, jobs=1): err= 0: pid=78973: Wed Apr 17 08:36:02 2024
00:39:30.872 write: IOPS=485, BW=121MiB/s (127MB/s)(1225MiB/10095msec); 0 zone resets
00:39:30.872 slat (usec): min=20, max=67170, avg=2004.52, stdev=3807.44
00:39:30.872 clat (msec): min=7, max=270, avg=129.78, stdev=43.24
00:39:30.872 lat (msec): min=7, max=283, avg=131.78, stdev=43.81
00:39:30.872 clat percentiles (msec):
00:39:30.872 | 1.00th=[ 39], 5.00th=[ 94], 10.00th=[ 97], 20.00th=[ 101],
00:39:30.872 | 30.00th=[ 104], 40.00th=[ 105], 50.00th=[ 107], 60.00th=[ 111],
00:39:30.872 | 70.00th=[ 150], 80.00th=[ 159], 90.00th=[ 205], 95.00th=[ 209],
00:39:30.872 | 99.00th=[ 245], 99.50th=[ 251], 99.90th=[ 255], 99.95th=[ 271],
00:39:30.872 | 99.99th=[ 271]
00:39:30.872 bw ( KiB/s): min=60806, max=182784, per=9.59%, avg=123819.70, stdev=38170.40, samples=20
00:39:30.872 iops : min= 237, max= 714, avg=483.60, stdev=149.11, samples=20
00:39:30.872 lat (msec) : 10=0.06%, 20=0.27%, 50=1.08%, 100=16.79%, 250=81.27%
00:39:30.872 lat (msec) : 500=0.53%
00:39:30.872 cpu : usr=1.48%, sys=1.26%, ctx=6505, majf=0, minf=1
00:39:30.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:39:30.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:30.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:30.872 issued rwts: total=0,4901,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:30.872 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:30.872 job8: (groupid=0, jobs=1): err= 0: pid=78974: Wed Apr 17 08:36:02 2024
00:39:30.872 write: IOPS=453, BW=113MiB/s (119MB/s)(1149MiB/10120msec); 0 zone resets
00:39:30.872 slat (usec): min=28, max=58141, avg=2140.86, stdev=3784.08
00:39:30.872 clat (msec): min=4, max=261, avg=138.77, stdev=20.16
00:39:30.872 lat (msec): min=4, max=261, avg=140.92, stdev=20.16
00:39:30.872 clat percentiles (msec):
00:39:30.872 | 1.00th=[ 92], 5.00th=[ 99], 10.00th=[ 105], 20.00th=[ 133],
00:39:30.872 | 30.00th=[ 138], 40.00th=[ 140], 50.00th=[ 144], 60.00th=[ 146],
00:39:30.872 | 70.00th=[ 148], 80.00th=[ 150], 90.00th=[ 155], 95.00th=[ 161],
00:39:30.872 | 99.00th=[ 186], 99.50th=[ 205], 99.90th=[ 253], 99.95th=[ 253],
00:39:30.872 | 99.99th=[ 262]
00:39:30.872 bw ( KiB/s): min=98816, max=157184, per=8.98%, avg=116004.70, stdev=13386.85, samples=20
00:39:30.872 iops : min= 386, max= 614, avg=453.10, stdev=52.31, samples=20
00:39:30.872 lat (msec) : 10=0.11%, 50=0.17%, 100=5.70%, 250=93.88%, 500=0.13%
00:39:30.872 cpu : usr=1.38%, sys=1.75%, ctx=5615, majf=0, minf=1
00:39:30.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6%
00:39:30.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:30.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:30.872 issued rwts: total=0,4594,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:30.872 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:30.872 job9: (groupid=0, jobs=1): err= 0: pid=78975: Wed Apr 17 08:36:02 2024
00:39:30.872 write: IOPS=334, BW=83.7MiB/s (87.8MB/s)(850MiB/10149msec); 0 zone resets
00:39:30.872 slat (usec): min=20, max=33117, avg=2900.70, stdev=5094.68
00:39:30.872 clat (msec): min=8, max=335, avg=188.17, stdev=26.68
00:39:30.872 lat (msec): min=8, max=335, avg=191.07, stdev=26.58
00:39:30.872 clat percentiles (msec):
00:39:30.872 | 1.00th=[ 110], 5.00th=[ 148], 10.00th=[ 157], 20.00th=[ 176],
00:39:30.872 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 192],
00:39:30.872 | 70.00th=[ 197], 80.00th=[ 205], 90.00th=[ 211], 95.00th=[ 228],
00:39:30.872 | 99.00th=[ 262], 99.50th=[ 288], 99.90th=[ 321], 99.95th=[ 338],
00:39:30.872 | 99.99th=[ 338]
00:39:30.872 bw ( KiB/s): min=69632, max=108544, per=6.61%, avg=85350.90, stdev=7936.49, samples=20
00:39:30.872 iops : min= 272, max= 424, avg=333.35, stdev=31.02, samples=20
00:39:30.872 lat (msec) : 10=0.12%, 50=0.12%, 100=0.71%, 250=96.79%, 500=2.27%
00:39:30.872 cpu : usr=1.05%, sys=1.13%, ctx=4320, majf=0, minf=1
00:39:30.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1%
00:39:30.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:30.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:30.872 issued rwts: total=0,3398,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:30.872 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:30.872 job10: (groupid=0, jobs=1): err= 0: pid=78976: Wed Apr 17 08:36:02 2024
00:39:30.872 write: IOPS=345, BW=86.3MiB/s (90.4MB/s)(876MiB/10150msec); 0 zone resets
00:39:30.872 slat (usec): min=21, max=115210, avg=2700.40, stdev=5474.69
00:39:30.873 clat (msec): min=12, max=328, avg=182.72, stdev=36.85
00:39:30.873 lat (msec): min=12, max=344, avg=185.42, stdev=37.14
00:39:30.873 clat percentiles (msec):
00:39:30.873 | 1.00th=[ 33], 5.00th=[ 140], 10.00th=[ 148], 20.00th=[ 157],
00:39:30.873 | 30.00th=[ 169], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 194],
00:39:30.873 | 70.00th=[ 199], 80.00th=[ 203], 90.00th=[ 218], 95.00th=[ 234],
00:39:30.873 | 99.00th=[ 271], 99.50th=[ 305], 99.90th=[ 330], 99.95th=[ 330],
00:39:30.873 | 99.99th=[ 330]
00:39:30.873 bw ( KiB/s): min=53248, max=113664, per=6.82%, avg=88021.10, stdev=13503.89, samples=20
00:39:30.873 iops : min= 208, max= 444, avg=343.80, stdev=52.75, samples=20
00:39:30.873 lat (msec) : 20=0.17%, 50=1.54%, 100=1.37%, 250=95.17%, 500=1.74%
00:39:30.873 cpu : usr=0.91%, sys=1.31%, ctx=3560, majf=0, minf=1
00:39:30.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2%
00:39:30.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:30.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:39:30.873 issued rwts: total=0,3502,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:30.873 latency : target=0, window=0, percentile=100.00%, depth=64
00:39:30.873
00:39:30.873 Run status group 0 (all jobs):
00:39:30.873 WRITE: bw=1261MiB/s (1322MB/s), 80.3MiB/s-207MiB/s (84.2MB/s-217MB/s), io=12.5GiB (13.4GB), run=10095-10156msec
00:39:30.873
00:39:30.873 Disk stats (read/write):
00:39:30.873 nvme0n1: ios=50/8908, merge=0/0, ticks=56/1218910, in_queue=1218966, util=98.38%
00:39:30.873 nvme10n1: ios=49/10924, merge=0/0, ticks=48/1219010, in_queue=1219058, util=98.28%
00:39:30.873 nvme1n1: ios=49/11193, merge=0/0, ticks=35/1219920, in_queue=1219955, util=98.34%
00:39:30.873 nvme2n1: ios=49/7091, merge=0/0, ticks=43/1213435, in_queue=1213478, util=98.45%
00:39:30.873 nvme3n1: ios=49/7785, merge=0/0, ticks=51/1218815, in_queue=1218866, util=98.73%
00:39:30.873 nvme4n1: ios=36/6417, merge=0/0, ticks=37/1217086, in_queue=1217123, util=98.65%
00:39:30.873 nvme5n1: ios=36/16639, merge=0/0, ticks=35/1218798, in_queue=1218833, util=98.70%
00:39:30.873 nvme6n1: ios=13/9702, merge=0/0, ticks=31/1220001, in_queue=1220032, util=98.55%
00:39:30.873 nvme7n1: ios=0/9084, merge=0/0, ticks=0/1218380, in_queue=1218380, util=98.75%
00:39:30.873 nvme8n1: ios=0/6697, merge=0/0, ticks=0/1216888, in_queue=1216888, util=98.86%
00:39:30.873 nvme9n1: ios=0/6891, merge=0/0, ticks=0/1217207, in_queue=1217207, util=98.79%
08:35:52 -- target/multiconnection.sh@36 -- # sync
00:39:20.779 08:35:52 -- target/multiconnection.sh@37 -- # seq 1 11
00:39:20.779 08:35:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:39:20.779 08:35:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:39:30.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:39:30.873 08:36:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1
00:39:30.873 08:36:02 -- common/autotest_common.sh@1198 -- # local i=0
00:39:30.873 08:36:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1
00:39:30.873 08:36:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:39:30.873 08:36:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:39:30.873 08:36:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1
00:39:30.873 08:36:02 -- common/autotest_common.sh@1210 -- # return 0
00:39:30.873 08:36:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:39:30.873 08:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable
00:39:30.873 08:36:02 -- common/autotest_common.sh@10 -- # set +x
00:39:30.873 08:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:39:30.873 08:36:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:39:30.873 08:36:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2
00:39:30.873 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s)
00:39:30.873 08:36:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2
00:39:30.873 08:36:02 -- common/autotest_common.sh@1198 -- # local i=0
00:39:30.873 08:36:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:39:30.873 08:36:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2
00:39:30.873 08:36:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:39:30.873 08:36:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2
00:39:30.873 08:36:03 -- common/autotest_common.sh@1210 -- # return 0
00:39:30.873 08:36:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:39:30.873 08:36:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:39:30.873 08:36:03 -- common/autotest_common.sh@10 -- # set +x
00:39:30.873 08:36:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:39:30.873 08:36:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:39:30.873 08:36:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3
00:39:30.873 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s)
00:39:30.873 08:36:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3
00:39:30.873 08:36:03 -- common/autotest_common.sh@1198 -- # local i=0
00:39:30.873 08:36:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:39:30.873 08:36:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3
00:39:30.873 08:36:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3
00:39:30.873 08:36:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:39:30.873 08:36:03 -- common/autotest_common.sh@1210 -- # return 0
00:39:30.873 08:36:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:39:30.873 08:36:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:39:30.873 08:36:03 -- common/autotest_common.sh@10 -- # set +x
00:39:30.873 08:36:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:39:30.873 08:36:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:39:30.873 08:36:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4
00:39:30.873 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s)
00:39:30.873 08:36:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4
00:39:30.873 08:36:03 -- common/autotest_common.sh@1198 -- # local i=0
00:39:30.873 08:36:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4
00:39:30.873 08:36:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:39:30.873 08:36:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:39:30.873 08:36:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4
00:39:30.873 08:36:03 -- common/autotest_common.sh@1210 -- # return 0
00:39:30.873 08:36:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:39:30.873 08:36:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:39:30.873 08:36:03 -- common/autotest_common.sh@10 -- # set +x
00:39:30.873 08:36:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:39:30.873 08:36:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:39:30.873 08:36:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5
00:39:30.873 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s)
00:39:30.873 08:36:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5
00:39:30.873 08:36:03 -- common/autotest_common.sh@1198 -- # local i=0
00:39:30.873 08:36:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:39:30.873 08:36:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5
00:39:30.873 08:36:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:39:30.873 08:36:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5
00:39:30.873 08:36:03 -- common/autotest_common.sh@1210 -- # return 0
00:39:30.873 08:36:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5
00:39:30.873 08:36:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:39:30.873 08:36:03 -- common/autotest_common.sh@10 -- # set +x
00:39:30.873 08:36:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:39:30.873 08:36:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:39:30.873 08:36:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6
00:39:30.873 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s)
00:39:30.873 08:36:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6
00:39:30.873 08:36:03 -- common/autotest_common.sh@1198 -- # local i=0
00:39:30.873 08:36:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:39:30.873 08:36:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6
00:39:30.873 08:36:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:39:30.873 08:36:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6
00:39:30.873 08:36:03 -- common/autotest_common.sh@1210 -- # return 0
00:39:30.873 08:36:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6
00:39:30.873 08:36:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:39:30.873 08:36:03 -- common/autotest_common.sh@10 -- # set +x
00:39:30.873 08:36:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:39:30.873 08:36:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:39:30.873 08:36:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7
00:39:30.873 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s)
00:39:30.873 08:36:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7
00:39:30.873 08:36:03 -- common/autotest_common.sh@1198 -- # local i=0
00:39:30.873 08:36:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:39:30.873 08:36:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7
00:39:30.873 08:36:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7
00:39:30.873 08:36:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:39:30.873 08:36:03 -- common/autotest_common.sh@1210 -- # return 0
00:39:30.873 08:36:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7
00:39:30.873 08:36:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:39:30.873 08:36:03 -- common/autotest_common.sh@10 -- # set +x
00:39:30.873 08:36:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:39:30.873 08:36:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:39:30.873 08:36:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8
00:39:30.873 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s)
00:39:30.873 08:36:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8
00:39:30.873 08:36:03 -- common/autotest_common.sh@1198 -- # local i=0
00:39:30.873 08:36:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8
00:39:30.873 08:36:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:39:30.873 08:36:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:39:30.873 08:36:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8
00:39:30.873 08:36:03 -- common/autotest_common.sh@1210 -- # return 0
00:39:30.873 08:36:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8
00:39:30.873 08:36:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:39:30.873 08:36:03 -- common/autotest_common.sh@10 -- # set +x
00:39:30.874 08:36:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:39:30.874 08:36:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:39:30.874 08:36:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9
00:39:30.874 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s)
00:39:30.874 08:36:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9
00:39:30.874 08:36:03 -- common/autotest_common.sh@1198 -- # local i=0
00:39:30.874 08:36:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:39:30.874 08:36:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9
00:39:30.874 08:36:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9
00:39:30.874 08:36:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:39:30.874 08:36:03 -- common/autotest_common.sh@1210 -- # return 0
00:39:30.874 08:36:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9
00:39:30.874 08:36:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:39:30.874 08:36:03 -- common/autotest_common.sh@10 -- # set +x
00:39:30.874 08:36:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:39:30.874 08:36:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:39:30.874 08:36:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10
00:39:30.874 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s)
00:39:30.874 08:36:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10
00:39:30.874 08:36:03 -- common/autotest_common.sh@1198 -- # local i=0
00:39:30.874 08:36:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:39:30.874 08:36:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10
00:39:30.874 08:36:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:39:30.874 08:36:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10
00:39:30.874 08:36:03 -- common/autotest_common.sh@1210 -- # return 0
00:39:30.874 08:36:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10
00:39:30.874 08:36:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:39:30.874 08:36:03 -- common/autotest_common.sh@10 -- # set +x
00:39:30.874 08:36:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:39:30.874 08:36:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:39:30.874 08:36:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11
00:39:30.874 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s)
00:39:30.874 08:36:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11
00:39:30.874 08:36:03 -- common/autotest_common.sh@1198 -- # local i=0
00:39:30.874 08:36:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11
00:39:30.874 08:36:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:39:30.874 08:36:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:39:30.874 08:36:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11
00:39:30.874 08:36:03 -- common/autotest_common.sh@1210 -- # return 0
00:39:30.874 08:36:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11
00:39:30.874 08:36:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:39:30.874 08:36:03 -- common/autotest_common.sh@10 -- # set +x
00:39:30.874 08:36:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:39:30.874 08:36:03 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state
00:39:30.874 08:36:03 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:39:30.874 08:36:04 -- target/multiconnection.sh@47 -- # nvmftestfini
00:39:30.874 08:36:04 -- nvmf/common.sh@476 -- # nvmfcleanup
00:39:30.874 08:36:04 -- nvmf/common.sh@116 -- # sync
00:39:30.874 08:36:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:39:30.874 08:36:04 -- nvmf/common.sh@119 -- # set +e
00:39:30.874 08:36:04 -- nvmf/common.sh@120 -- # for i in {1..20}
00:39:30.874 08:36:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:39:30.874 rmmod nvme_tcp
00:39:30.874 rmmod nvme_fabrics
00:39:30.874 rmmod nvme_keyring
00:39:30.874 08:36:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:39:30.874 08:36:04 -- nvmf/common.sh@123 -- # set -e
00:39:30.874 08:36:04 -- nvmf/common.sh@124 -- # return 0
00:39:30.874 08:36:04 -- nvmf/common.sh@477 -- # '[' -n 78268 ']'
00:39:30.874 08:36:04 -- nvmf/common.sh@478 -- # killprocess 78268
00:39:30.874 08:36:04 -- common/autotest_common.sh@926 -- # '[' -z 78268 ']'
00:39:30.874 08:36:04 -- common/autotest_common.sh@930 -- # kill -0 78268
00:39:30.874 08:36:04 -- common/autotest_common.sh@931 -- # uname
00:39:30.874 08:36:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:39:30.874 08:36:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78268
00:39:30.874 killing process with pid 78268
00:39:30.874 08:36:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:39:30.874 08:36:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:39:30.874 08:36:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78268'
00:39:30.874 08:36:04 -- common/autotest_common.sh@945 -- # kill 78268
00:39:30.874 08:36:04 -- common/autotest_common.sh@950 -- # wait 78268
00:39:31.812 08:36:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:39:31.812 08:36:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:39:31.812 08:36:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:39:31.812 08:36:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:39:31.812 08:36:04 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:39:31.812 08:36:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:39:31.812 08:36:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:39:31.812 08:36:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:39:31.812 08:36:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:39:31.812
00:39:31.812 real 0m49.999s
00:39:31.812 user 2m54.549s
00:39:31.812 sys 0m21.393s
00:39:31.812 08:36:04 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:39:31.812 ************************************
00:39:31.812 END TEST nvmf_multiconnection
00:39:31.812 ************************************
00:39:31.812 08:36:04 -- common/autotest_common.sh@10 -- # set +x
00:39:31.812 08:36:04 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp
00:39:31.812 08:36:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:39:31.812 08:36:04 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:39:31.812 08:36:04 -- common/autotest_common.sh@10 -- # set +x
00:39:31.812 ************************************
00:39:31.812 START TEST nvmf_initiator_timeout
00:39:31.812 ************************************
00:39:31.812 08:36:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp
00:39:31.812 * Looking for test storage...
00:39:31.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:39:31.812 08:36:05 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:39:31.812 08:36:05 -- nvmf/common.sh@7 -- # uname -s
00:39:31.812 08:36:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:39:31.812 08:36:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:39:31.812 08:36:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:39:31.812 08:36:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:39:31.812 08:36:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:39:31.812 08:36:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:39:31.812 08:36:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:39:31.812 08:36:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:39:31.812 08:36:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:39:31.812 08:36:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:39:31.812 08:36:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2
00:39:31.812 08:36:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2
00:39:32.071 08:36:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:39:32.071 08:36:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:39:32.071 08:36:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:39:32.071 08:36:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:39:32.071 08:36:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:39:32.071 08:36:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:39:32.071 08:36:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:39:32.071 08:36:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:32.071 08:36:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:32.071 08:36:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:32.071 08:36:05 -- paths/export.sh@5 -- # export PATH
00:39:32.071 08:36:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:32.071 08:36:05 -- nvmf/common.sh@46 -- # : 0
00:39:32.071 08:36:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:39:32.071 08:36:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:39:32.071 08:36:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:39:32.071 08:36:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:39:32.071 08:36:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:39:32.071 08:36:05 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:39:32.071 08:36:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:39:32.071 08:36:05 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:39:32.071 08:36:05 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64
00:39:32.071 08:36:05 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:39:32.071 08:36:05 -- target/initiator_timeout.sh@14 -- # nvmftestinit
00:39:32.071 08:36:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:39:32.071 08:36:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:39:32.071 08:36:05 -- nvmf/common.sh@436 -- # prepare_net_devs
00:39:32.071 08:36:05 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:39:32.071 08:36:05 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:39:32.071 08:36:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:39:32.071 08:36:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:39:32.071 08:36:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:39:32.071 08:36:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:39:32.071 08:36:05 -- nvmf/common.sh@404 -- # [[ no == yes ]]
00:39:32.071 08:36:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:39:32.071 08:36:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:39:32.071 08:36:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:39:32.071 08:36:05 -- nvmf/common.sh@420 -- # nvmf_veth_init
00:39:32.071 08:36:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:39:32.071 08:36:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:39:32.071 08:36:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:39:32.071 08:36:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:39:32.071 08:36:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:39:32.071 08:36:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:39:32.071 08:36:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:39:32.071 08:36:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:39:32.071 08:36:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:39:32.071 08:36:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:39:32.071 08:36:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:39:32.071 08:36:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:39:32.071 08:36:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:39:32.071 08:36:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:39:32.071 Cannot find device "nvmf_tgt_br"
00:39:32.071 08:36:05 -- nvmf/common.sh@154 -- # true
00:39:32.071 08:36:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:39:32.071 Cannot find device "nvmf_tgt_br2"
00:39:32.071 08:36:05 -- nvmf/common.sh@155 -- # true
00:39:32.071 08:36:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:39:32.071 08:36:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:39:32.071 Cannot find device "nvmf_tgt_br"
00:39:32.071 08:36:05 -- nvmf/common.sh@157 -- # true
00:39:32.071 08:36:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:39:32.071 Cannot find device "nvmf_tgt_br2"
00:39:32.071 08:36:05 -- nvmf/common.sh@158 -- # true
00:39:32.071 08:36:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:39:32.071 08:36:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:39:32.071 08:36:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:39:32.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:39:32.071 08:36:05 -- nvmf/common.sh@161 -- # true
00:39:32.071 08:36:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:39:32.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:39:32.071 08:36:05 -- nvmf/common.sh@162 -- # true
00:39:32.071 08:36:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:39:32.071 08:36:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:39:32.071 08:36:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:39:32.071 08:36:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:39:32.071 08:36:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:39:32.071 08:36:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:39:32.330 08:36:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:39:32.330 08:36:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:39:32.330 08:36:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:39:32.330 08:36:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:39:32.330 08:36:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:39:32.330 08:36:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:39:32.330 08:36:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:39:32.330 08:36:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:39:32.330 08:36:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:39:32.330 08:36:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:39:32.330 08:36:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:39:32.330 08:36:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:39:32.330 08:36:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:39:32.330 08:36:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:39:32.330 08:36:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:39:32.330 08:36:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:39:32.330 08:36:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:39:32.330 08:36:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:39:32.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:39:32.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms
00:39:32.330
00:39:32.330 --- 10.0.0.2 ping statistics ---
00:39:32.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:39:32.330 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms
00:39:32.330 08:36:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:39:32.330 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:39:32.330 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms
00:39:32.330
00:39:32.330 --- 10.0.0.3 ping statistics ---
00:39:32.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:39:32.330 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms
00:39:32.330 08:36:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:39:32.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:39:32.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:39:32.330 00:39:32.330 --- 10.0.0.1 ping statistics --- 00:39:32.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:32.330 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:39:32.330 08:36:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:32.330 08:36:05 -- nvmf/common.sh@421 -- # return 0 00:39:32.330 08:36:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:39:32.330 08:36:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:32.330 08:36:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:39:32.330 08:36:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:39:32.330 08:36:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:32.330 08:36:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:39:32.330 08:36:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:39:32.330 08:36:05 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:39:32.330 08:36:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:39:32.330 08:36:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:39:32.330 08:36:05 -- common/autotest_common.sh@10 -- # set +x 00:39:32.330 08:36:05 -- nvmf/common.sh@469 -- # nvmfpid=79349 00:39:32.330 08:36:05 -- nvmf/common.sh@470 -- # waitforlisten 79349 00:39:32.330 08:36:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:39:32.330 08:36:05 -- common/autotest_common.sh@819 -- # '[' -z 79349 ']' 00:39:32.330 08:36:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:32.330 08:36:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:39:32.330 08:36:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:32.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:32.330 08:36:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:39:32.330 08:36:05 -- common/autotest_common.sh@10 -- # set +x 00:39:32.330 [2024-04-17 08:36:05.598681] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:39:32.330 [2024-04-17 08:36:05.598776] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:32.590 [2024-04-17 08:36:05.741722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:32.590 [2024-04-17 08:36:05.847433] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:39:32.590 [2024-04-17 08:36:05.847567] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:32.590 [2024-04-17 08:36:05.847575] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:32.590 [2024-04-17 08:36:05.847581] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
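Here nvmfappstart launches the target inside the namespace (NVMF_APP is prefixed with the ip netns exec command) and waitforlisten blocks until the RPC socket answers. A reduced sketch of that launch-and-wait pattern; the binary and socket paths are taken from the log, and the readiness loop is a simplification — the real waitforlisten polls the socket with an actual RPC call rather than just checking for its existence:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# crude readiness check: wait for the UNIX-domain RPC socket to appear
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done
echo "nvmf_tgt up as pid $nvmfpid"

-e 0xFFFF enables all tracepoint groups, which is why the startup notices above suggest 'spdk_trace -s nvmf -i 0' for capturing a runtime snapshot.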
00:39:32.590 [2024-04-17 08:36:05.847784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:32.590 [2024-04-17 08:36:05.847857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:39:32.590 [2024-04-17 08:36:05.848075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:32.590 [2024-04-17 08:36:05.848079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:39:33.158 08:36:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:39:33.158 08:36:06 -- common/autotest_common.sh@852 -- # return 0 00:39:33.158 08:36:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:39:33.158 08:36:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:39:33.158 08:36:06 -- common/autotest_common.sh@10 -- # set +x 00:39:33.417 08:36:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:33.417 08:36:06 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:39:33.417 08:36:06 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:33.417 08:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:33.417 08:36:06 -- common/autotest_common.sh@10 -- # set +x 00:39:33.417 Malloc0 00:39:33.417 08:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:33.417 08:36:06 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:39:33.417 08:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:33.417 08:36:06 -- common/autotest_common.sh@10 -- # set +x 00:39:33.417 Delay0 00:39:33.417 08:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:33.417 08:36:06 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:33.417 08:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:33.417 08:36:06 -- common/autotest_common.sh@10 -- # set +x 00:39:33.417 [2024-04-17 08:36:06.584378] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:33.417 08:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:33.417 08:36:06 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:33.417 08:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:33.417 08:36:06 -- common/autotest_common.sh@10 -- # set +x 00:39:33.417 08:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:33.417 08:36:06 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:33.417 08:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:33.417 08:36:06 -- common/autotest_common.sh@10 -- # set +x 00:39:33.417 08:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:33.417 08:36:06 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:33.417 08:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:33.417 08:36:06 -- common/autotest_common.sh@10 -- # set +x 00:39:33.417 [2024-04-17 08:36:06.624579] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:33.417 08:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:33.417 08:36:06 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:33.675 08:36:06 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:39:33.675 08:36:06 -- common/autotest_common.sh@1177 -- # local i=0 00:39:33.675 08:36:06 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:39:33.675 08:36:06 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:39:33.675 08:36:06 -- common/autotest_common.sh@1184 -- # sleep 2 00:39:35.580 08:36:08 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:39:35.580 08:36:08 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:39:35.580 08:36:08 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:39:35.580 08:36:08 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:39:35.580 08:36:08 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:39:35.580 08:36:08 -- common/autotest_common.sh@1187 -- # return 0 00:39:35.580 08:36:08 -- target/initiator_timeout.sh@35 -- # fio_pid=79431 00:39:35.580 08:36:08 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:39:35.580 08:36:08 -- target/initiator_timeout.sh@37 -- # sleep 3 00:39:35.580 [global] 00:39:35.580 thread=1 00:39:35.580 invalidate=1 00:39:35.580 rw=write 00:39:35.580 time_based=1 00:39:35.580 runtime=60 00:39:35.580 ioengine=libaio 00:39:35.580 direct=1 00:39:35.580 bs=4096 00:39:35.580 iodepth=1 00:39:35.580 norandommap=0 00:39:35.580 numjobs=1 00:39:35.580 00:39:35.580 verify_dump=1 00:39:35.580 verify_backlog=512 00:39:35.580 verify_state_save=0 00:39:35.580 do_verify=1 00:39:35.580 verify=crc32c-intel 00:39:35.580 [job0] 00:39:35.580 filename=/dev/nvme0n1 00:39:35.580 Could not set queue depth (nvme0n1) 00:39:35.839 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:35.839 fio-3.35 00:39:35.839 Starting 1 thread 00:39:39.128 08:36:11 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:39:39.128 08:36:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:39.128 08:36:11 -- common/autotest_common.sh@10 -- # set +x 00:39:39.128 true 00:39:39.128 08:36:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:39.128 08:36:11 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:39:39.128 08:36:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:39.128 08:36:11 -- common/autotest_common.sh@10 -- # set +x 00:39:39.128 true 00:39:39.128 08:36:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:39.128 08:36:11 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:39:39.128 08:36:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:39.128 08:36:11 -- common/autotest_common.sh@10 -- # set +x 00:39:39.128 true 00:39:39.128 08:36:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:39.128 08:36:11 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:39:39.128 08:36:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:39.128 08:36:11 -- common/autotest_common.sh@10 -- # set +x 00:39:39.128 true 00:39:39.128 08:36:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:39.128 08:36:11 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:39:41.687 08:36:14 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:39:41.687 08:36:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:41.687 08:36:14 -- common/autotest_common.sh@10 -- # set +x 00:39:41.687 true 00:39:41.687 08:36:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:41.687 08:36:14 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:39:41.687 08:36:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:41.687 08:36:14 -- common/autotest_common.sh@10 -- # set +x 00:39:41.687 true 00:39:41.687 08:36:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:41.687 08:36:14 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:39:41.687 08:36:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:41.687 08:36:14 -- common/autotest_common.sh@10 -- # set +x 00:39:41.687 true 00:39:41.687 08:36:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:41.687 08:36:14 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:39:41.687 08:36:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:39:41.687 08:36:14 -- common/autotest_common.sh@10 -- # set +x 00:39:41.687 true 00:39:41.687 08:36:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:39:41.687 08:36:14 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:39:41.687 08:36:14 -- target/initiator_timeout.sh@54 -- # wait 79431 00:40:37.924 00:40:37.924 job0: (groupid=0, jobs=1): err= 0: pid=79452: Wed Apr 17 08:37:09 2024 00:40:37.924 read: IOPS=933, BW=3736KiB/s (3825kB/s)(219MiB/60000msec) 00:40:37.924 slat (usec): min=6, max=14466, avg=11.92, stdev=69.48 00:40:37.924 clat (usec): min=6, max=40395k, avg=896.07, stdev=170646.99 00:40:37.924 lat (usec): min=138, max=40395k, avg=907.99, stdev=170647.00 00:40:37.924 clat percentiles (usec): 00:40:37.924 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:40:37.924 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 178], 00:40:37.924 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 217], 00:40:37.924 | 99.00th=[ 235], 99.50th=[ 243], 99.90th=[ 269], 99.95th=[ 285], 00:40:37.924 | 99.99th=[ 1045] 00:40:37.924 write: IOPS=938, BW=3755KiB/s (3845kB/s)(220MiB/60000msec); 0 zone resets 00:40:37.924 slat (usec): min=10, max=899, avg=16.88, stdev= 6.96 00:40:37.924 clat (usec): min=3, max=7541, avg=142.65, stdev=40.47 00:40:37.924 lat (usec): min=122, max=7555, avg=159.53, stdev=41.29 00:40:37.924 clat percentiles (usec): 00:40:37.924 | 1.00th=[ 117], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 127], 00:40:37.924 | 30.00th=[ 130], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 145], 00:40:37.924 | 70.00th=[ 151], 80.00th=[ 157], 90.00th=[ 167], 95.00th=[ 176], 00:40:37.924 | 99.00th=[ 196], 99.50th=[ 208], 99.90th=[ 351], 99.95th=[ 388], 00:40:37.924 | 99.99th=[ 930] 00:40:37.924 bw ( KiB/s): min= 5640, max=13648, per=100.00%, avg=11341.92, stdev=1666.98, samples=39 00:40:37.924 iops : min= 1410, max= 3412, avg=2835.46, stdev=416.78, samples=39 00:40:37.924 lat (usec) : 4=0.01%, 10=0.01%, 50=0.01%, 100=0.01%, 250=99.74% 00:40:37.924 lat (usec) : 500=0.23%, 750=0.01%, 1000=0.01% 00:40:37.924 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:40:37.924 cpu : usr=0.38%, sys=1.87%, ctx=112380, majf=0, minf=2 00:40:37.924 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:40:37.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.924 issued rwts: total=56035,56320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:37.924 00:40:37.924 Run status group 0 (all jobs): 00:40:37.924 READ: bw=3736KiB/s (3825kB/s), 3736KiB/s-3736KiB/s (3825kB/s-3825kB/s), io=219MiB (230MB), run=60000-60000msec 00:40:37.924 WRITE: bw=3755KiB/s (3845kB/s), 3755KiB/s-3755KiB/s (3845kB/s-3845kB/s), io=220MiB (231MB), run=60000-60000msec 00:40:37.924 00:40:37.924 Disk stats (read/write): 00:40:37.924 nvme0n1: ios=56069/56051, merge=0/0, ticks=10149/8434, in_queue=18583, util=99.56% 00:40:37.924 08:37:09 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:37.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:37.924 08:37:09 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:37.924 08:37:09 -- common/autotest_common.sh@1198 -- # local i=0 00:40:37.924 08:37:09 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:40:37.924 08:37:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:37.924 08:37:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:40:37.924 08:37:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:37.924 nvmf hotplug test: fio successful as expected 00:40:37.924 08:37:09 -- common/autotest_common.sh@1210 -- # return 0 00:40:37.924 08:37:09 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:40:37.924 08:37:09 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:40:37.924 08:37:09 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:37.924 08:37:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:37.924 08:37:09 -- common/autotest_common.sh@10 -- # set +x 00:40:37.924 08:37:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:37.924 08:37:09 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:40:37.924 08:37:09 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:40:37.924 08:37:09 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:40:37.924 08:37:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:40:37.924 08:37:09 -- nvmf/common.sh@116 -- # sync 00:40:37.924 08:37:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:40:37.924 08:37:09 -- nvmf/common.sh@119 -- # set +e 00:40:37.924 08:37:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:40:37.924 08:37:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:40:37.924 rmmod nvme_tcp 00:40:37.924 rmmod nvme_fabrics 00:40:37.924 rmmod nvme_keyring 00:40:37.924 08:37:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:40:37.924 08:37:09 -- nvmf/common.sh@123 -- # set -e 00:40:37.924 08:37:09 -- nvmf/common.sh@124 -- # return 0 00:40:37.924 08:37:09 -- nvmf/common.sh@477 -- # '[' -n 79349 ']' 00:40:37.924 08:37:09 -- nvmf/common.sh@478 -- # killprocess 79349 00:40:37.924 08:37:09 -- common/autotest_common.sh@926 -- # '[' -z 79349 ']' 00:40:37.924 08:37:09 -- common/autotest_common.sh@930 -- # kill -0 79349 00:40:37.924 08:37:09 -- common/autotest_common.sh@931 -- # uname 00:40:37.925 08:37:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:40:37.925 08:37:09 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 79349 00:40:37.925 08:37:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:40:37.925 08:37:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:40:37.925 killing process with pid 79349 00:40:37.925 08:37:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79349' 00:40:37.925 08:37:09 -- common/autotest_common.sh@945 -- # kill 79349 00:40:37.925 08:37:09 -- common/autotest_common.sh@950 -- # wait 79349 00:40:37.925 08:37:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:40:37.925 08:37:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:40:37.925 08:37:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:40:37.925 08:37:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:37.925 08:37:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:40:37.925 08:37:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:37.925 08:37:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:37.925 08:37:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:37.925 08:37:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:40:37.925 00:40:37.925 real 1m4.632s 00:40:37.925 user 4m9.161s 00:40:37.925 sys 0m6.102s 00:40:37.925 08:37:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:37.925 08:37:09 -- common/autotest_common.sh@10 -- # set +x 00:40:37.925 ************************************ 00:40:37.925 END TEST nvmf_initiator_timeout 00:40:37.925 ************************************ 00:40:37.925 08:37:09 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:40:37.925 08:37:09 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:40:37.925 08:37:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:40:37.925 08:37:09 -- common/autotest_common.sh@10 -- # set +x 00:40:37.925 08:37:09 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:40:37.925 08:37:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:40:37.925 08:37:09 -- common/autotest_common.sh@10 -- # set +x 00:40:37.925 08:37:09 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:40:37.925 08:37:09 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:40:37.925 08:37:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:40:37.925 08:37:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:40:37.925 08:37:09 -- common/autotest_common.sh@10 -- # set +x 00:40:37.925 ************************************ 00:40:37.925 START TEST nvmf_multicontroller 00:40:37.925 ************************************ 00:40:37.925 08:37:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:40:37.925 * Looking for test storage... 
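Before the multicontroller run gets going, it is worth recapping what the timeout test above actually exercised: a malloc bdev wrapped in SPDK's delay bdev. Delay0 starts with 30 us average and p99 latencies, fio runs against it over NVMe/TCP, the latencies are raised mid-run to roughly 31 s (310 s for p99 writes) to push I/O past the initiator's timeout handling, and are then dropped back to 30 us so the job can drain and finish. The RPC sequence doing the ramp, sketched with an assumed rpc.py path (the harness issues the same calls through rpc_cmd; delay-bdev latencies are given in microseconds):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# wrap Malloc0: -r/-t are avg/p99 read latency, -w/-n avg/p99 write latency
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
# stretch latencies to ~31 s (p99 write to ~310 s) while fio is running
for m in avg_read avg_write p99_read; do
    $rpc bdev_delay_update_latency Delay0 "$m" 31000000
done
$rpc bdev_delay_update_latency Delay0 p99_write 310000000
# restore 30 us so outstanding I/O completes and fio exits cleanly
for m in avg_read avg_write p99_read p99_write; do
    $rpc bdev_delay_update_latency Delay0 "$m" 30
done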
00:40:37.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:40:37.925 08:37:09 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:37.925 08:37:09 -- nvmf/common.sh@7 -- # uname -s 00:40:37.925 08:37:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:37.925 08:37:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:37.925 08:37:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:37.925 08:37:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:37.925 08:37:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:37.925 08:37:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:37.925 08:37:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:37.925 08:37:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:37.925 08:37:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:37.925 08:37:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:37.925 08:37:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:40:37.925 08:37:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:40:37.925 08:37:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:37.925 08:37:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:37.925 08:37:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:37.925 08:37:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:37.925 08:37:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:37.925 08:37:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:37.925 08:37:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:37.925 08:37:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.925 08:37:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.925 08:37:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.925 08:37:09 -- 
paths/export.sh@5 -- # export PATH 00:40:37.925 08:37:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.925 08:37:09 -- nvmf/common.sh@46 -- # : 0 00:40:37.925 08:37:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:40:37.925 08:37:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:40:37.925 08:37:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:40:37.925 08:37:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:37.925 08:37:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:37.925 08:37:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:40:37.925 08:37:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:40:37.925 08:37:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:40:37.925 08:37:09 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:37.925 08:37:09 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:37.925 08:37:09 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:40:37.925 08:37:09 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:40:37.925 08:37:09 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:37.925 08:37:09 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:40:37.925 08:37:09 -- host/multicontroller.sh@23 -- # nvmftestinit 00:40:37.925 08:37:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:40:37.925 08:37:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:37.925 08:37:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:40:37.925 08:37:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:40:37.925 08:37:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:40:37.925 08:37:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:37.925 08:37:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:37.925 08:37:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:37.925 08:37:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:40:37.925 08:37:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:40:37.925 08:37:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:40:37.925 08:37:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:40:37.925 08:37:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:40:37.925 08:37:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:40:37.925 08:37:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:37.925 08:37:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:37.925 08:37:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:40:37.925 08:37:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:40:37.925 08:37:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:37.925 08:37:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:37.925 08:37:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:37.925 08:37:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:37.925 08:37:09 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:37.925 08:37:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:37.925 08:37:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:37.925 08:37:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:37.925 08:37:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:40:37.925 08:37:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:40:37.925 Cannot find device "nvmf_tgt_br" 00:40:37.925 08:37:09 -- nvmf/common.sh@154 -- # true 00:40:37.925 08:37:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:40:37.925 Cannot find device "nvmf_tgt_br2" 00:40:37.925 08:37:09 -- nvmf/common.sh@155 -- # true 00:40:37.925 08:37:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:40:37.925 08:37:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:40:37.925 Cannot find device "nvmf_tgt_br" 00:40:37.925 08:37:09 -- nvmf/common.sh@157 -- # true 00:40:37.925 08:37:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:40:37.925 Cannot find device "nvmf_tgt_br2" 00:40:37.925 08:37:10 -- nvmf/common.sh@158 -- # true 00:40:37.925 08:37:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:40:37.925 08:37:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:40:37.925 08:37:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:37.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:37.925 08:37:10 -- nvmf/common.sh@161 -- # true 00:40:37.925 08:37:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:37.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:37.925 08:37:10 -- nvmf/common.sh@162 -- # true 00:40:37.925 08:37:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:40:37.925 08:37:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:37.926 08:37:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:37.926 08:37:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:37.926 08:37:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:37.926 08:37:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:37.926 08:37:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:37.926 08:37:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:40:37.926 08:37:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:40:37.926 08:37:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:40:37.926 08:37:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:40:37.926 08:37:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:40:37.926 08:37:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:40:37.926 08:37:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:37.926 08:37:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:37.926 08:37:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:37.926 08:37:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:40:37.926 08:37:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:40:37.926 08:37:10 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:40:37.926 08:37:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:37.926 08:37:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:37.926 08:37:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:37.926 08:37:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:37.926 08:37:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:40:37.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:37.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:40:37.926 00:40:37.926 --- 10.0.0.2 ping statistics --- 00:40:37.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:37.926 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:40:37.926 08:37:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:40:37.926 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:37.926 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.146 ms 00:40:37.926 00:40:37.926 --- 10.0.0.3 ping statistics --- 00:40:37.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:37.926 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:40:37.926 08:37:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:37.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:37.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:40:37.926 00:40:37.926 --- 10.0.0.1 ping statistics --- 00:40:37.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:37.926 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:40:37.926 08:37:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:37.926 08:37:10 -- nvmf/common.sh@421 -- # return 0 00:40:37.926 08:37:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:40:37.926 08:37:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:37.926 08:37:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:40:37.926 08:37:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:40:37.926 08:37:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:37.926 08:37:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:40:37.926 08:37:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:40:37.926 08:37:10 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:40:37.926 08:37:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:40:37.926 08:37:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:40:37.926 08:37:10 -- common/autotest_common.sh@10 -- # set +x 00:40:37.926 08:37:10 -- nvmf/common.sh@469 -- # nvmfpid=80280 00:40:37.926 08:37:10 -- nvmf/common.sh@470 -- # waitforlisten 80280 00:40:37.926 08:37:10 -- common/autotest_common.sh@819 -- # '[' -z 80280 ']' 00:40:37.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:37.926 08:37:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:37.926 08:37:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:40:37.926 08:37:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:40:37.926 08:37:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:40:37.926 08:37:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:40:37.926 08:37:10 -- common/autotest_common.sh@10 -- # set +x 00:40:37.926 [2024-04-17 08:37:10.446382] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:40:37.926 [2024-04-17 08:37:10.446477] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:37.926 [2024-04-17 08:37:10.590444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:37.926 [2024-04-17 08:37:10.692547] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:40:37.926 [2024-04-17 08:37:10.692676] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:37.926 [2024-04-17 08:37:10.692684] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:37.926 [2024-04-17 08:37:10.692690] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:37.926 [2024-04-17 08:37:10.692935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:40:37.926 [2024-04-17 08:37:10.693145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:37.926 [2024-04-17 08:37:10.693149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:40:38.184 08:37:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:40:38.184 08:37:11 -- common/autotest_common.sh@852 -- # return 0 00:40:38.184 08:37:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:40:38.184 08:37:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:40:38.184 08:37:11 -- common/autotest_common.sh@10 -- # set +x 00:40:38.184 08:37:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:38.184 08:37:11 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:38.184 08:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:38.184 08:37:11 -- common/autotest_common.sh@10 -- # set +x 00:40:38.184 [2024-04-17 08:37:11.377753] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:38.184 08:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:38.184 08:37:11 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:38.184 08:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:38.184 08:37:11 -- common/autotest_common.sh@10 -- # set +x 00:40:38.184 Malloc0 00:40:38.184 08:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:38.184 08:37:11 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:38.184 08:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:38.184 08:37:11 -- common/autotest_common.sh@10 -- # set +x 00:40:38.184 08:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:38.184 08:37:11 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:38.184 08:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:38.184 08:37:11 -- common/autotest_common.sh@10 -- # set +x 00:40:38.184 08:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:38.184 08:37:11 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener 
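This second target instance runs with -m 0xE rather than -m 0xF, so the startup notices above report three reactors on cores 1-3 while core 0 stays free for the bdevperf initiator started below. The mask is simply a hex bitmap of CPU cores; a throwaway helper to decode one (illustrative only, not part of SPDK):

# list the cores selected by an SPDK/DPDK core mask
mask=0xE                      # binary 1110 -> cores 1, 2, 3
for c in $(seq 0 31); do
    (( (mask >> c) & 1 )) && echo "core $c"
done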
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:38.184 08:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:38.184 08:37:11 -- common/autotest_common.sh@10 -- # set +x 00:40:38.184 [2024-04-17 08:37:11.443420] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:38.184 08:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:38.184 08:37:11 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:38.184 08:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:38.184 08:37:11 -- common/autotest_common.sh@10 -- # set +x 00:40:38.184 [2024-04-17 08:37:11.451338] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:38.184 08:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:38.184 08:37:11 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:40:38.184 08:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:38.184 08:37:11 -- common/autotest_common.sh@10 -- # set +x 00:40:38.184 Malloc1 00:40:38.184 08:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:38.184 08:37:11 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:40:38.185 08:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:38.185 08:37:11 -- common/autotest_common.sh@10 -- # set +x 00:40:38.185 08:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:38.185 08:37:11 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:40:38.185 08:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:38.185 08:37:11 -- common/autotest_common.sh@10 -- # set +x 00:40:38.185 08:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:38.185 08:37:11 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:38.185 08:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:38.185 08:37:11 -- common/autotest_common.sh@10 -- # set +x 00:40:38.185 08:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:38.185 08:37:11 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:40:38.185 08:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:38.185 08:37:11 -- common/autotest_common.sh@10 -- # set +x 00:40:38.185 08:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:38.185 08:37:11 -- host/multicontroller.sh@44 -- # bdevperf_pid=80336 00:40:38.185 08:37:11 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:40:38.185 08:37:11 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:38.185 08:37:11 -- host/multicontroller.sh@47 -- # waitforlisten 80336 /var/tmp/bdevperf.sock 00:40:38.185 08:37:11 -- common/autotest_common.sh@819 -- # '[' -z 80336 ']' 00:40:38.185 08:37:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:38.185 08:37:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:40:38.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
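At this point the target exposes two subsystems with identical listener layouts, and bdevperf has been started on its own RPC socket (-z makes it wait to be driven over RPC — the perform_tests call later). The target-side configuration, written out as direct rpc.py calls assuming the default /var/tmp/spdk.sock and the repo layout shown in the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in 1 2; do
    $rpc bdev_malloc_create 64 512 -b Malloc$((i-1))
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$((i-1))
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4421
done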
00:40:38.185 08:37:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:38.185 08:37:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:40:38.185 08:37:11 -- common/autotest_common.sh@10 -- # set +x 00:40:39.559 08:37:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:40:39.559 08:37:12 -- common/autotest_common.sh@852 -- # return 0 00:40:39.559 08:37:12 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:40:39.559 08:37:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:39.559 08:37:12 -- common/autotest_common.sh@10 -- # set +x 00:40:39.559 NVMe0n1 00:40:39.559 08:37:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:39.559 08:37:12 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:40:39.559 08:37:12 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:40:39.559 08:37:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:39.559 08:37:12 -- common/autotest_common.sh@10 -- # set +x 00:40:39.559 08:37:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:39.559 1 00:40:39.559 08:37:12 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:40:39.559 08:37:12 -- common/autotest_common.sh@640 -- # local es=0 00:40:39.559 08:37:12 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:40:39.559 08:37:12 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:40:39.559 08:37:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:40:39.559 08:37:12 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:40:39.559 08:37:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:40:39.559 08:37:12 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:40:39.559 08:37:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:39.559 08:37:12 -- common/autotest_common.sh@10 -- # set +x 00:40:39.559 2024/04/17 08:37:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:40:39.559 request: 00:40:39.559 { 00:40:39.559 "method": "bdev_nvme_attach_controller", 00:40:39.559 "params": { 00:40:39.559 "name": "NVMe0", 00:40:39.559 "trtype": "tcp", 00:40:39.559 "traddr": "10.0.0.2", 00:40:39.559 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:40:39.559 "hostaddr": "10.0.0.2", 00:40:39.559 "hostsvcid": "60000", 00:40:39.559 "adrfam": "ipv4", 00:40:39.559 "trsvcid": "4420", 00:40:39.559 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:40:39.559 } 00:40:39.559 } 00:40:39.559 Got JSON-RPC error 
response 00:40:39.559 GoRPCClient: error on JSON-RPC call 00:40:39.560 08:37:12 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:40:39.560 08:37:12 -- common/autotest_common.sh@643 -- # es=1 00:40:39.560 08:37:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:40:39.560 08:37:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:40:39.560 08:37:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:40:39.560 08:37:12 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:40:39.560 08:37:12 -- common/autotest_common.sh@640 -- # local es=0 00:40:39.560 08:37:12 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:40:39.560 08:37:12 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:40:39.560 08:37:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:40:39.560 08:37:12 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:40:39.560 08:37:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:40:39.560 08:37:12 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:40:39.560 08:37:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:39.560 08:37:12 -- common/autotest_common.sh@10 -- # set +x 00:40:39.560 2024/04/17 08:37:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:40:39.560 request: 00:40:39.560 { 00:40:39.560 "method": "bdev_nvme_attach_controller", 00:40:39.560 "params": { 00:40:39.560 "name": "NVMe0", 00:40:39.560 "trtype": "tcp", 00:40:39.560 "traddr": "10.0.0.2", 00:40:39.560 "hostaddr": "10.0.0.2", 00:40:39.560 "hostsvcid": "60000", 00:40:39.560 "adrfam": "ipv4", 00:40:39.560 "trsvcid": "4420", 00:40:39.560 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:40:39.560 } 00:40:39.560 } 00:40:39.560 Got JSON-RPC error response 00:40:39.560 GoRPCClient: error on JSON-RPC call 00:40:39.560 08:37:12 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:40:39.560 08:37:12 -- common/autotest_common.sh@643 -- # es=1 00:40:39.560 08:37:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:40:39.560 08:37:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:40:39.560 08:37:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:40:39.560 08:37:12 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:40:39.560 08:37:12 -- common/autotest_common.sh@640 -- # local es=0 00:40:39.560 08:37:12 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:40:39.560 08:37:12 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:40:39.560 08:37:12 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:40:39.560 08:37:12 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:40:39.560 08:37:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:40:39.560 08:37:12 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:40:39.560 08:37:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:39.560 08:37:12 -- common/autotest_common.sh@10 -- # set +x 00:40:39.560 2024/04/17 08:37:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:40:39.560 request: 00:40:39.560 { 00:40:39.560 "method": "bdev_nvme_attach_controller", 00:40:39.560 "params": { 00:40:39.560 "name": "NVMe0", 00:40:39.560 "trtype": "tcp", 00:40:39.560 "traddr": "10.0.0.2", 00:40:39.560 "hostaddr": "10.0.0.2", 00:40:39.560 "hostsvcid": "60000", 00:40:39.560 "adrfam": "ipv4", 00:40:39.560 "trsvcid": "4420", 00:40:39.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:39.560 "multipath": "disable" 00:40:39.560 } 00:40:39.560 } 00:40:39.560 Got JSON-RPC error response 00:40:39.560 GoRPCClient: error on JSON-RPC call 00:40:39.560 08:37:12 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:40:39.560 08:37:12 -- common/autotest_common.sh@643 -- # es=1 00:40:39.560 08:37:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:40:39.560 08:37:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:40:39.560 08:37:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:40:39.560 08:37:12 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:40:39.560 08:37:12 -- common/autotest_common.sh@640 -- # local es=0 00:40:39.560 08:37:12 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:40:39.560 08:37:12 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:40:39.560 08:37:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:40:39.560 08:37:12 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:40:39.560 08:37:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:40:39.560 08:37:12 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:40:39.560 08:37:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:39.560 08:37:12 -- common/autotest_common.sh@10 -- # set +x 00:40:39.561 2024/04/17 08:37:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified 
network path 00:40:39.561 request: 00:40:39.561 { 00:40:39.561 "method": "bdev_nvme_attach_controller", 00:40:39.561 "params": { 00:40:39.561 "name": "NVMe0", 00:40:39.561 "trtype": "tcp", 00:40:39.561 "traddr": "10.0.0.2", 00:40:39.561 "hostaddr": "10.0.0.2", 00:40:39.561 "hostsvcid": "60000", 00:40:39.561 "adrfam": "ipv4", 00:40:39.561 "trsvcid": "4420", 00:40:39.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:39.561 "multipath": "failover" 00:40:39.561 } 00:40:39.561 } 00:40:39.561 Got JSON-RPC error response 00:40:39.561 GoRPCClient: error on JSON-RPC call 00:40:39.561 08:37:12 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:40:39.561 08:37:12 -- common/autotest_common.sh@643 -- # es=1 00:40:39.561 08:37:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:40:39.561 08:37:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:40:39.561 08:37:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:40:39.561 08:37:12 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:39.561 08:37:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:39.561 08:37:12 -- common/autotest_common.sh@10 -- # set +x 00:40:39.561 00:40:39.561 08:37:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:39.561 08:37:12 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:39.561 08:37:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:39.561 08:37:12 -- common/autotest_common.sh@10 -- # set +x 00:40:39.561 08:37:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:39.561 08:37:12 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:40:39.561 08:37:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:39.561 08:37:12 -- common/autotest_common.sh@10 -- # set +x 00:40:39.561 00:40:39.561 08:37:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:39.561 08:37:12 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:40:39.561 08:37:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:39.561 08:37:12 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:40:39.561 08:37:12 -- common/autotest_common.sh@10 -- # set +x 00:40:39.561 08:37:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:39.561 08:37:12 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:40:39.561 08:37:12 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:40.933 0 00:40:40.933 08:37:13 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:40:40.933 08:37:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:40.933 08:37:13 -- common/autotest_common.sh@10 -- # set +x 00:40:40.933 08:37:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:40.933 08:37:13 -- host/multicontroller.sh@100 -- # killprocess 80336 00:40:40.933 08:37:13 -- common/autotest_common.sh@926 -- # '[' -z 80336 ']' 00:40:40.933 08:37:13 -- common/autotest_common.sh@930 -- # kill -0 80336 00:40:40.933 08:37:13 -- common/autotest_common.sh@931 -- # uname 00:40:40.933 08:37:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux 
']' 00:40:40.933 08:37:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80336 00:40:40.933 08:37:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:40:40.933 08:37:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:40:40.933 08:37:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80336' 00:40:40.933 killing process with pid 80336 00:40:40.933 08:37:14 -- common/autotest_common.sh@945 -- # kill 80336 00:40:40.933 08:37:14 -- common/autotest_common.sh@950 -- # wait 80336 00:40:40.933 08:37:14 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:40.933 08:37:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:40.933 08:37:14 -- common/autotest_common.sh@10 -- # set +x 00:40:40.933 08:37:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:40.933 08:37:14 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:40:40.933 08:37:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:40.933 08:37:14 -- common/autotest_common.sh@10 -- # set +x 00:40:40.933 08:37:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:40.933 08:37:14 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:40:40.933 08:37:14 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:40:40.933 08:37:14 -- common/autotest_common.sh@1597 -- # read -r file 00:40:40.933 08:37:14 -- common/autotest_common.sh@1596 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:40:40.933 08:37:14 -- common/autotest_common.sh@1596 -- # sort -u 00:40:40.933 08:37:14 -- common/autotest_common.sh@1598 -- # cat 00:40:41.192 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:40:41.192 [2024-04-17 08:37:11.560071] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:40:41.192 [2024-04-17 08:37:11.560187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80336 ] 00:40:41.192 [2024-04-17 08:37:11.687051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:41.192 [2024-04-17 08:37:11.791461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:41.192 [2024-04-17 08:37:12.784642] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 301d2610-7cc7-4f35-825f-4b1cd98cea67 already exists 00:40:41.192 [2024-04-17 08:37:12.784700] bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:301d2610-7cc7-4f35-825f-4b1cd98cea67 alias for bdev NVMe1n1 00:40:41.192 [2024-04-17 08:37:12.784714] bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:40:41.192 Running I/O for 1 seconds... 
00:40:41.192 00:40:41.192 Latency(us) 00:40:41.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:41.192 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:40:41.192 NVMe0n1 : 1.00 21174.35 82.71 0.00 0.00 6036.75 3405.58 11447.34 00:40:41.192 =================================================================================================================== 00:40:41.192 Total : 21174.35 82.71 0.00 0.00 6036.75 3405.58 11447.34 00:40:41.192 Received shutdown signal, test time was about 1.000000 seconds 00:40:41.192 00:40:41.192 Latency(us) 00:40:41.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:41.192 =================================================================================================================== 00:40:41.192 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:41.192 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:40:41.192 08:37:14 -- common/autotest_common.sh@1603 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:40:41.192 08:37:14 -- common/autotest_common.sh@1597 -- # read -r file 00:40:41.192 08:37:14 -- host/multicontroller.sh@108 -- # nvmftestfini 00:40:41.192 08:37:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:40:41.192 08:37:14 -- nvmf/common.sh@116 -- # sync 00:40:41.192 08:37:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:40:41.192 08:37:14 -- nvmf/common.sh@119 -- # set +e 00:40:41.192 08:37:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:40:41.192 08:37:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:40:41.192 rmmod nvme_tcp 00:40:41.192 rmmod nvme_fabrics 00:40:41.192 rmmod nvme_keyring 00:40:41.192 08:37:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:40:41.192 08:37:14 -- nvmf/common.sh@123 -- # set -e 00:40:41.192 08:37:14 -- nvmf/common.sh@124 -- # return 0 00:40:41.192 08:37:14 -- nvmf/common.sh@477 -- # '[' -n 80280 ']' 00:40:41.192 08:37:14 -- nvmf/common.sh@478 -- # killprocess 80280 00:40:41.192 08:37:14 -- common/autotest_common.sh@926 -- # '[' -z 80280 ']' 00:40:41.192 08:37:14 -- common/autotest_common.sh@930 -- # kill -0 80280 00:40:41.192 08:37:14 -- common/autotest_common.sh@931 -- # uname 00:40:41.192 08:37:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:40:41.192 08:37:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80280 00:40:41.192 08:37:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:40:41.192 killing process with pid 80280 00:40:41.192 08:37:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:40:41.192 08:37:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80280' 00:40:41.192 08:37:14 -- common/autotest_common.sh@945 -- # kill 80280 00:40:41.192 08:37:14 -- common/autotest_common.sh@950 -- # wait 80280 00:40:41.450 08:37:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:40:41.450 08:37:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:40:41.450 08:37:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:40:41.450 08:37:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:41.450 08:37:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:40:41.450 08:37:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:41.450 08:37:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:41.450 08:37:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:41.450 08:37:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:40:41.450 
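Stripped of the harness plumbing, the multicontroller exercise above reduces to a handful of rpc.py calls against a paused bdevperf instance. The error response at the top of the section is the negative half of the test: the harness expects that attach to fail (the es=1 check), then attaches a second controller over the 4421 listener with a distinct host address/port. A minimal sketch of the positive path, assuming a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420/4421, paths relative to an SPDK checkout, and bdevperf's usual CLI flags (the binary location and -q/-o/-w/-t spelling are assumptions of the sketch; the harness launches bdevperf elsewhere in the log):

  # start bdevperf idle; -z makes it wait for a perform_tests RPC trigger
  ./build/examples/bdevperf -q 128 -o 4096 -w write -t 1 -z \
      -r /var/tmp/bdevperf.sock &

  # first path to the subsystem
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # second controller to the same subsystem via the 4421 listener,
  # pinned to a distinct host address/port pair (-i/-c)
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -i 10.0.0.2 -c 60000

  # both controllers visible? then kick off the timed run
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests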
00:40:41.450 real 0m4.962s 00:40:41.450 user 0m15.000s 00:40:41.450 sys 0m1.135s 00:40:41.450 08:37:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:41.450 08:37:14 -- common/autotest_common.sh@10 -- # set +x 00:40:41.450 ************************************ 00:40:41.450 END TEST nvmf_multicontroller 00:40:41.450 ************************************ 00:40:41.709 08:37:14 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:40:41.709 08:37:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:40:41.709 08:37:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:40:41.709 08:37:14 -- common/autotest_common.sh@10 -- # set +x 00:40:41.709 ************************************ 00:40:41.709 START TEST nvmf_aer 00:40:41.709 ************************************ 00:40:41.709 08:37:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:40:41.709 * Looking for test storage... 00:40:41.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:40:41.709 08:37:14 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:41.709 08:37:14 -- nvmf/common.sh@7 -- # uname -s 00:40:41.709 08:37:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:41.709 08:37:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:41.710 08:37:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:41.710 08:37:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:41.710 08:37:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:41.710 08:37:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:41.710 08:37:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:41.710 08:37:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:41.710 08:37:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:41.710 08:37:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:41.710 08:37:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:40:41.710 08:37:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:40:41.710 08:37:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:41.710 08:37:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:41.710 08:37:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:41.710 08:37:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:41.710 08:37:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:41.710 08:37:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:41.710 08:37:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:41.710 08:37:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.710 08:37:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.710 08:37:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.710 08:37:14 -- paths/export.sh@5 -- # export PATH 00:40:41.710 08:37:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.710 08:37:14 -- nvmf/common.sh@46 -- # : 0 00:40:41.710 08:37:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:40:41.710 08:37:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:40:41.710 08:37:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:40:41.710 08:37:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:41.710 08:37:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:41.710 08:37:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:40:41.710 08:37:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:40:41.710 08:37:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:40:41.710 08:37:14 -- host/aer.sh@11 -- # nvmftestinit 00:40:41.710 08:37:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:40:41.710 08:37:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:41.710 08:37:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:40:41.710 08:37:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:40:41.710 08:37:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:40:41.710 08:37:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:41.710 08:37:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:41.710 08:37:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:41.710 08:37:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:40:41.710 08:37:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:40:41.710 08:37:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:40:41.710 08:37:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:40:41.710 08:37:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:40:41.710 08:37:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:40:41.710 08:37:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:41.710 08:37:14 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:41.710 08:37:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:40:41.710 08:37:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:40:41.710 08:37:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:41.710 08:37:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:41.710 08:37:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:41.710 08:37:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:41.710 08:37:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:41.710 08:37:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:41.710 08:37:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:41.710 08:37:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:41.710 08:37:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:40:41.710 08:37:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:40:41.710 Cannot find device "nvmf_tgt_br" 00:40:41.710 08:37:14 -- nvmf/common.sh@154 -- # true 00:40:41.710 08:37:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:40:41.710 Cannot find device "nvmf_tgt_br2" 00:40:41.710 08:37:15 -- nvmf/common.sh@155 -- # true 00:40:41.710 08:37:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:40:41.710 08:37:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:40:41.710 Cannot find device "nvmf_tgt_br" 00:40:41.710 08:37:15 -- nvmf/common.sh@157 -- # true 00:40:41.710 08:37:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:40:41.710 Cannot find device "nvmf_tgt_br2" 00:40:41.710 08:37:15 -- nvmf/common.sh@158 -- # true 00:40:41.710 08:37:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:40:41.969 08:37:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:40:41.969 08:37:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:41.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:41.969 08:37:15 -- nvmf/common.sh@161 -- # true 00:40:41.969 08:37:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:41.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:41.969 08:37:15 -- nvmf/common.sh@162 -- # true 00:40:41.969 08:37:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:40:41.969 08:37:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:41.969 08:37:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:41.969 08:37:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:41.969 08:37:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:41.969 08:37:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:41.969 08:37:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:41.969 08:37:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:40:41.969 08:37:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:40:41.969 08:37:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:40:41.969 08:37:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:40:41.969 08:37:15 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:40:41.969 08:37:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:40:41.969 08:37:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:41.969 08:37:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:41.969 08:37:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:41.969 08:37:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:40:41.969 08:37:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:40:41.969 08:37:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:40:41.969 08:37:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:41.969 08:37:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:41.969 08:37:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:41.969 08:37:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:41.969 08:37:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:40:41.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:41.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:40:41.969 00:40:41.969 --- 10.0.0.2 ping statistics --- 00:40:41.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:41.969 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:40:41.969 08:37:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:40:41.969 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:41.969 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:40:41.969 00:40:41.969 --- 10.0.0.3 ping statistics --- 00:40:41.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:41.969 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:40:41.969 08:37:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:41.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:41.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:40:41.969 00:40:41.969 --- 10.0.0.1 ping statistics --- 00:40:41.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:41.969 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:40:41.969 08:37:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:41.969 08:37:15 -- nvmf/common.sh@421 -- # return 0 00:40:41.969 08:37:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:40:41.969 08:37:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:41.969 08:37:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:40:41.969 08:37:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:40:41.969 08:37:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:41.969 08:37:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:40:41.969 08:37:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:40:42.228 08:37:15 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:40:42.228 08:37:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:40:42.228 08:37:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:40:42.228 08:37:15 -- common/autotest_common.sh@10 -- # set +x 00:40:42.228 08:37:15 -- nvmf/common.sh@469 -- # nvmfpid=80586 00:40:42.228 08:37:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:40:42.228 08:37:15 -- nvmf/common.sh@470 -- # waitforlisten 80586 00:40:42.228 08:37:15 -- common/autotest_common.sh@819 -- # '[' -z 80586 ']' 00:40:42.228 08:37:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:42.228 08:37:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:40:42.228 08:37:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:42.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:42.228 08:37:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:40:42.228 08:37:15 -- common/autotest_common.sh@10 -- # set +x 00:40:42.228 [2024-04-17 08:37:15.381935] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:40:42.229 [2024-04-17 08:37:15.382005] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:42.229 [2024-04-17 08:37:15.520137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:42.488 [2024-04-17 08:37:15.627086] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:40:42.488 [2024-04-17 08:37:15.627223] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:42.488 [2024-04-17 08:37:15.627231] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:42.488 [2024-04-17 08:37:15.627237] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
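The interface plumbing that precedes the pings gives the initiator and the namespaced target a private L2 segment: two veth pairs whose host-side peers hang off one bridge. Condensed into the essential commands from the xtrace above (the second target leg, nvmf_tgt_if2/10.0.0.3, follows the same pattern and is omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator leg
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target leg
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
      ip link set "$link" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # open the NVMe/TCP port and let bridge-local traffic through
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2   # initiator-side sanity check, as in the log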
00:40:42.488 [2024-04-17 08:37:15.627470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:42.488 [2024-04-17 08:37:15.627711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:42.488 [2024-04-17 08:37:15.627595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:40:42.488 [2024-04-17 08:37:15.627716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:40:43.056 08:37:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:40:43.056 08:37:16 -- common/autotest_common.sh@852 -- # return 0 00:40:43.056 08:37:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:40:43.056 08:37:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:40:43.056 08:37:16 -- common/autotest_common.sh@10 -- # set +x 00:40:43.056 08:37:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:43.056 08:37:16 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:43.056 08:37:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:43.056 08:37:16 -- common/autotest_common.sh@10 -- # set +x 00:40:43.314 [2024-04-17 08:37:16.391803] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:43.314 08:37:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:43.314 08:37:16 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:40:43.314 08:37:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:43.314 08:37:16 -- common/autotest_common.sh@10 -- # set +x 00:40:43.314 Malloc0 00:40:43.314 08:37:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:43.314 08:37:16 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:40:43.314 08:37:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:43.314 08:37:16 -- common/autotest_common.sh@10 -- # set +x 00:40:43.314 08:37:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:43.314 08:37:16 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:43.314 08:37:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:43.314 08:37:16 -- common/autotest_common.sh@10 -- # set +x 00:40:43.314 08:37:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:43.314 08:37:16 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:43.314 08:37:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:43.314 08:37:16 -- common/autotest_common.sh@10 -- # set +x 00:40:43.314 [2024-04-17 08:37:16.461117] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:43.314 08:37:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:43.314 08:37:16 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:40:43.314 08:37:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:43.314 08:37:16 -- common/autotest_common.sh@10 -- # set +x 00:40:43.314 [2024-04-17 08:37:16.472899] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:40:43.314 [ 00:40:43.314 { 00:40:43.314 "allow_any_host": true, 00:40:43.314 "hosts": [], 00:40:43.314 "listen_addresses": [], 00:40:43.314 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:43.314 "subtype": "Discovery" 00:40:43.314 }, 00:40:43.314 { 00:40:43.314 "allow_any_host": true, 00:40:43.314 "hosts": 
[], 00:40:43.314 "listen_addresses": [ 00:40:43.314 { 00:40:43.314 "adrfam": "IPv4", 00:40:43.314 "traddr": "10.0.0.2", 00:40:43.314 "transport": "TCP", 00:40:43.314 "trsvcid": "4420", 00:40:43.314 "trtype": "TCP" 00:40:43.314 } 00:40:43.314 ], 00:40:43.314 "max_cntlid": 65519, 00:40:43.314 "max_namespaces": 2, 00:40:43.314 "min_cntlid": 1, 00:40:43.314 "model_number": "SPDK bdev Controller", 00:40:43.314 "namespaces": [ 00:40:43.314 { 00:40:43.314 "bdev_name": "Malloc0", 00:40:43.314 "name": "Malloc0", 00:40:43.314 "nguid": "8F221CCB1CD94CBDB80C77344A9C7AD2", 00:40:43.314 "nsid": 1, 00:40:43.314 "uuid": "8f221ccb-1cd9-4cbd-b80c-77344a9c7ad2" 00:40:43.314 } 00:40:43.314 ], 00:40:43.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:43.314 "serial_number": "SPDK00000000000001", 00:40:43.314 "subtype": "NVMe" 00:40:43.314 } 00:40:43.314 ] 00:40:43.314 08:37:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:43.314 08:37:16 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:40:43.314 08:37:16 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:40:43.314 08:37:16 -- host/aer.sh@33 -- # aerpid=80641 00:40:43.314 08:37:16 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:40:43.314 08:37:16 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:40:43.314 08:37:16 -- common/autotest_common.sh@1244 -- # local i=0 00:40:43.314 08:37:16 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:40:43.314 08:37:16 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:40:43.314 08:37:16 -- common/autotest_common.sh@1247 -- # i=1 00:40:43.314 08:37:16 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:40:43.314 08:37:16 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:40:43.314 08:37:16 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:40:43.314 08:37:16 -- common/autotest_common.sh@1247 -- # i=2 00:40:43.315 08:37:16 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:40:43.574 08:37:16 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:40:43.574 08:37:16 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:40:43.574 08:37:16 -- common/autotest_common.sh@1255 -- # return 0 00:40:43.574 08:37:16 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:40:43.574 08:37:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:43.574 08:37:16 -- common/autotest_common.sh@10 -- # set +x 00:40:43.574 Malloc1 00:40:43.574 08:37:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:43.574 08:37:16 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:40:43.574 08:37:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:43.574 08:37:16 -- common/autotest_common.sh@10 -- # set +x 00:40:43.574 08:37:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:43.574 08:37:16 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:40:43.574 08:37:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:43.574 08:37:16 -- common/autotest_common.sh@10 -- # set +x 00:40:43.574 [ 00:40:43.574 { 00:40:43.574 "allow_any_host": true, 00:40:43.574 "hosts": [], 00:40:43.574 "listen_addresses": [], 00:40:43.574 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:43.574 "subtype": "Discovery" 00:40:43.574 }, 00:40:43.574 { 00:40:43.574 "allow_any_host": true, 00:40:43.574 "hosts": [], 00:40:43.574 "listen_addresses": [ 00:40:43.574 { 00:40:43.574 "adrfam": "IPv4", 00:40:43.574 "traddr": "10.0.0.2", 00:40:43.574 "transport": "TCP", 00:40:43.574 "trsvcid": "4420", 00:40:43.574 "trtype": "TCP" 00:40:43.574 } 00:40:43.574 ], 00:40:43.574 "max_cntlid": 65519, 00:40:43.574 "max_namespaces": 2, 00:40:43.574 "min_cntlid": 1, 00:40:43.574 "model_number": "SPDK bdev Controller", 00:40:43.574 "namespaces": [ 00:40:43.574 { 00:40:43.574 "bdev_name": "Malloc0", 00:40:43.574 "name": "Malloc0", 00:40:43.574 "nguid": "8F221CCB1CD94CBDB80C77344A9C7AD2", 00:40:43.574 "nsid": 1, 00:40:43.574 "uuid": "8f221ccb-1cd9-4cbd-b80c-77344a9c7ad2" 00:40:43.574 }, 00:40:43.574 { 00:40:43.574 "bdev_name": "Malloc1", 00:40:43.574 "name": "Malloc1", 00:40:43.574 "nguid": "47202A3B5EFA4ECFB3CCB5B6736C3805", 00:40:43.574 "nsid": 2, 00:40:43.574 "uuid": "47202a3b-5efa-4ecf-b3cc-b5b6736c3805" 00:40:43.574 } 00:40:43.574 ], 00:40:43.574 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:43.574 "serial_number": "SPDK00000000000001", 00:40:43.574 "subtype": "NVMe" 00:40:43.574 } 00:40:43.574 ] 00:40:43.574 08:37:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:43.574 08:37:16 -- host/aer.sh@43 -- # wait 80641 00:40:43.574 Asynchronous Event Request test 00:40:43.574 Attaching to 10.0.0.2 00:40:43.574 Attached to 10.0.0.2 00:40:43.574 Registering asynchronous event callbacks... 00:40:43.574 Starting namespace attribute notice tests for all controllers... 00:40:43.574 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:40:43.574 aer_cb - Changed Namespace 00:40:43.574 Cleaning up... 
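The "aer_cb for log page 4" line above is the point of the test: adding a namespace to a live subsystem makes the target raise a Changed Namespace List notice (log page 0x04), and the connected aer tool's callback fires. The RPC side of that exchange, pulled out of the xtrace above into one sequence (socket defaults assumed; the aer binary itself is test/nvme/aer/aer as invoked in the log):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 2        # room for exactly two namespaces
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420

  # with test/nvme/aer/aer connected and waiting on its touch file,
  # this second namespace is what triggers the notice
  ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2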
00:40:43.574 08:37:16 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:40:43.574 08:37:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:43.574 08:37:16 -- common/autotest_common.sh@10 -- # set +x 00:40:43.574 08:37:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:43.574 08:37:16 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:40:43.574 08:37:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:43.574 08:37:16 -- common/autotest_common.sh@10 -- # set +x 00:40:43.574 08:37:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:43.574 08:37:16 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:43.574 08:37:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:43.574 08:37:16 -- common/autotest_common.sh@10 -- # set +x 00:40:43.574 08:37:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:43.574 08:37:16 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:40:43.574 08:37:16 -- host/aer.sh@51 -- # nvmftestfini 00:40:43.574 08:37:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:40:43.574 08:37:16 -- nvmf/common.sh@116 -- # sync 00:40:43.854 08:37:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:40:43.854 08:37:16 -- nvmf/common.sh@119 -- # set +e 00:40:43.854 08:37:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:40:43.854 08:37:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:40:43.854 rmmod nvme_tcp 00:40:43.854 rmmod nvme_fabrics 00:40:43.854 rmmod nvme_keyring 00:40:43.854 08:37:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:40:43.854 08:37:16 -- nvmf/common.sh@123 -- # set -e 00:40:43.854 08:37:16 -- nvmf/common.sh@124 -- # return 0 00:40:43.854 08:37:16 -- nvmf/common.sh@477 -- # '[' -n 80586 ']' 00:40:43.854 08:37:16 -- nvmf/common.sh@478 -- # killprocess 80586 00:40:43.854 08:37:16 -- common/autotest_common.sh@926 -- # '[' -z 80586 ']' 00:40:43.854 08:37:16 -- common/autotest_common.sh@930 -- # kill -0 80586 00:40:43.854 08:37:16 -- common/autotest_common.sh@931 -- # uname 00:40:43.854 08:37:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:40:43.854 08:37:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80586 00:40:43.854 08:37:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:40:43.854 08:37:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:40:43.854 08:37:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80586' 00:40:43.854 killing process with pid 80586 00:40:43.854 08:37:17 -- common/autotest_common.sh@945 -- # kill 80586 00:40:43.854 [2024-04-17 08:37:17.009138] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:40:43.854 08:37:17 -- common/autotest_common.sh@950 -- # wait 80586 00:40:44.113 08:37:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:40:44.113 08:37:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:40:44.113 08:37:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:40:44.113 08:37:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:44.113 08:37:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:40:44.113 08:37:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:44.113 08:37:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:44.113 08:37:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:44.113 08:37:17 -- nvmf/common.sh@278 
-- # ip -4 addr flush nvmf_init_if 00:40:44.113 ************************************ 00:40:44.113 END TEST nvmf_aer 00:40:44.113 ************************************ 00:40:44.113 00:40:44.113 real 0m2.488s 00:40:44.113 user 0m6.690s 00:40:44.113 sys 0m0.678s 00:40:44.113 08:37:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:44.113 08:37:17 -- common/autotest_common.sh@10 -- # set +x 00:40:44.113 08:37:17 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:40:44.113 08:37:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:40:44.113 08:37:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:40:44.113 08:37:17 -- common/autotest_common.sh@10 -- # set +x 00:40:44.113 ************************************ 00:40:44.113 START TEST nvmf_async_init 00:40:44.113 ************************************ 00:40:44.113 08:37:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:40:44.372 * Looking for test storage... 00:40:44.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:40:44.372 08:37:17 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:44.372 08:37:17 -- nvmf/common.sh@7 -- # uname -s 00:40:44.372 08:37:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:44.372 08:37:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:44.372 08:37:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:44.372 08:37:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:44.372 08:37:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:44.372 08:37:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:44.372 08:37:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:44.372 08:37:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:44.372 08:37:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:44.372 08:37:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:44.372 08:37:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:40:44.372 08:37:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:40:44.372 08:37:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:44.372 08:37:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:44.372 08:37:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:44.372 08:37:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:44.372 08:37:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:44.372 08:37:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:44.372 08:37:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:44.372 08:37:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.373 08:37:17 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.373 08:37:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.373 08:37:17 -- paths/export.sh@5 -- # export PATH 00:40:44.373 08:37:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.373 08:37:17 -- nvmf/common.sh@46 -- # : 0 00:40:44.373 08:37:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:40:44.373 08:37:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:40:44.373 08:37:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:40:44.373 08:37:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:44.373 08:37:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:44.373 08:37:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:40:44.373 08:37:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:40:44.373 08:37:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:40:44.373 08:37:17 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:40:44.373 08:37:17 -- host/async_init.sh@14 -- # null_block_size=512 00:40:44.373 08:37:17 -- host/async_init.sh@15 -- # null_bdev=null0 00:40:44.373 08:37:17 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:40:44.373 08:37:17 -- host/async_init.sh@20 -- # uuidgen 00:40:44.373 08:37:17 -- host/async_init.sh@20 -- # tr -d - 00:40:44.373 08:37:17 -- host/async_init.sh@20 -- # nguid=a10686ac7804413181a6ec5ce503951a 00:40:44.373 08:37:17 -- host/async_init.sh@22 -- # nvmftestinit 00:40:44.373 08:37:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:40:44.373 08:37:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:44.373 08:37:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:40:44.373 08:37:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:40:44.373 08:37:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:40:44.373 08:37:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:44.373 08:37:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:44.373 08:37:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:44.373 
08:37:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:40:44.373 08:37:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:40:44.373 08:37:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:40:44.373 08:37:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:40:44.373 08:37:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:40:44.373 08:37:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:40:44.373 08:37:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:44.373 08:37:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:44.373 08:37:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:40:44.373 08:37:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:40:44.373 08:37:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:44.373 08:37:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:44.373 08:37:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:44.373 08:37:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:44.373 08:37:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:44.373 08:37:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:44.373 08:37:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:44.373 08:37:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:44.373 08:37:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:40:44.373 08:37:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:40:44.373 Cannot find device "nvmf_tgt_br" 00:40:44.373 08:37:17 -- nvmf/common.sh@154 -- # true 00:40:44.373 08:37:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:40:44.373 Cannot find device "nvmf_tgt_br2" 00:40:44.373 08:37:17 -- nvmf/common.sh@155 -- # true 00:40:44.373 08:37:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:40:44.373 08:37:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:40:44.373 Cannot find device "nvmf_tgt_br" 00:40:44.373 08:37:17 -- nvmf/common.sh@157 -- # true 00:40:44.373 08:37:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:40:44.373 Cannot find device "nvmf_tgt_br2" 00:40:44.373 08:37:17 -- nvmf/common.sh@158 -- # true 00:40:44.373 08:37:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:40:44.373 08:37:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:40:44.632 08:37:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:44.632 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:44.632 08:37:17 -- nvmf/common.sh@161 -- # true 00:40:44.632 08:37:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:44.632 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:44.632 08:37:17 -- nvmf/common.sh@162 -- # true 00:40:44.632 08:37:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:40:44.632 08:37:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:44.632 08:37:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:44.632 08:37:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:44.632 08:37:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:44.632 08:37:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:44.632 
08:37:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:44.632 08:37:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:40:44.632 08:37:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:40:44.632 08:37:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:40:44.632 08:37:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:40:44.632 08:37:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:40:44.632 08:37:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:40:44.632 08:37:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:44.632 08:37:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:44.632 08:37:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:44.632 08:37:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:40:44.632 08:37:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:40:44.632 08:37:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:40:44.632 08:37:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:44.632 08:37:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:44.632 08:37:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:44.632 08:37:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:44.632 08:37:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:40:44.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:44.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:40:44.632 00:40:44.632 --- 10.0.0.2 ping statistics --- 00:40:44.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:44.632 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:40:44.632 08:37:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:40:44.632 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:44.632 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:40:44.632 00:40:44.632 --- 10.0.0.3 ping statistics --- 00:40:44.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:44.632 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:40:44.632 08:37:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:44.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:44.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:40:44.632 00:40:44.632 --- 10.0.0.1 ping statistics --- 00:40:44.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:44.632 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:40:44.632 08:37:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:44.632 08:37:17 -- nvmf/common.sh@421 -- # return 0 00:40:44.632 08:37:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:40:44.632 08:37:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:44.632 08:37:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:40:44.632 08:37:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:40:44.632 08:37:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:44.632 08:37:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:40:44.632 08:37:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:40:44.632 08:37:17 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:40:44.632 08:37:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:40:44.632 08:37:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:40:44.632 08:37:17 -- common/autotest_common.sh@10 -- # set +x 00:40:44.632 08:37:17 -- nvmf/common.sh@469 -- # nvmfpid=80816 00:40:44.632 08:37:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:40:44.632 08:37:17 -- nvmf/common.sh@470 -- # waitforlisten 80816 00:40:44.632 08:37:17 -- common/autotest_common.sh@819 -- # '[' -z 80816 ']' 00:40:44.632 08:37:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:44.632 08:37:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:40:44.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:44.632 08:37:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:44.632 08:37:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:40:44.632 08:37:17 -- common/autotest_common.sh@10 -- # set +x 00:40:44.891 [2024-04-17 08:37:17.977792] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:40:44.891 [2024-04-17 08:37:17.977874] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:44.891 [2024-04-17 08:37:18.116223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:45.149 [2024-04-17 08:37:18.222803] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:40:45.149 [2024-04-17 08:37:18.222965] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:45.149 [2024-04-17 08:37:18.222976] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:45.150 [2024-04-17 08:37:18.222983] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
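The "Waiting for process to start up and listen on UNIX domain socket" line is the harness polling the target's RPC endpoint before issuing any commands. A plain-bash sketch of that wait, using rpc_get_methods as a cheap probe with rpc.py's -t timeout; this is an illustration of the pattern, not the harness's actual waitforlisten implementation:

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!

  # block until the RPC server answers, bailing out if the target dies first
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.2
  done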
00:40:45.150 [2024-04-17 08:37:18.223015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:45.717 08:37:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:40:45.717 08:37:18 -- common/autotest_common.sh@852 -- # return 0 00:40:45.717 08:37:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:40:45.717 08:37:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:40:45.717 08:37:18 -- common/autotest_common.sh@10 -- # set +x 00:40:45.717 08:37:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:45.717 08:37:18 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:40:45.717 08:37:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:45.717 08:37:18 -- common/autotest_common.sh@10 -- # set +x 00:40:45.717 [2024-04-17 08:37:18.909113] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:45.717 08:37:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:45.717 08:37:18 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:40:45.717 08:37:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:45.717 08:37:18 -- common/autotest_common.sh@10 -- # set +x 00:40:45.717 null0 00:40:45.717 08:37:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:45.717 08:37:18 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:40:45.717 08:37:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:45.717 08:37:18 -- common/autotest_common.sh@10 -- # set +x 00:40:45.717 08:37:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:45.717 08:37:18 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:40:45.717 08:37:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:45.717 08:37:18 -- common/autotest_common.sh@10 -- # set +x 00:40:45.717 08:37:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:45.717 08:37:18 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a10686ac7804413181a6ec5ce503951a 00:40:45.717 08:37:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:45.717 08:37:18 -- common/autotest_common.sh@10 -- # set +x 00:40:45.717 08:37:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:45.717 08:37:18 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:45.717 08:37:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:45.717 08:37:18 -- common/autotest_common.sh@10 -- # set +x 00:40:45.717 [2024-04-17 08:37:18.969107] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:45.717 08:37:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:45.717 08:37:18 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:40:45.717 08:37:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:45.717 08:37:18 -- common/autotest_common.sh@10 -- # set +x 00:40:45.976 nvme0n1 00:40:45.976 08:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:45.976 08:37:19 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:40:45.976 08:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:45.976 08:37:19 -- common/autotest_common.sh@10 -- # set +x 00:40:45.976 [ 00:40:45.976 { 00:40:45.976 "aliases": [ 00:40:45.976 "a10686ac-7804-4131-81a6-ec5ce503951a" 
00:40:45.976 ], 00:40:45.976 "assigned_rate_limits": { 00:40:45.976 "r_mbytes_per_sec": 0, 00:40:45.976 "rw_ios_per_sec": 0, 00:40:45.976 "rw_mbytes_per_sec": 0, 00:40:45.976 "w_mbytes_per_sec": 0 00:40:45.976 }, 00:40:45.976 "block_size": 512, 00:40:45.976 "claimed": false, 00:40:45.976 "driver_specific": { 00:40:45.976 "mp_policy": "active_passive", 00:40:45.976 "nvme": [ 00:40:45.976 { 00:40:45.976 "ctrlr_data": { 00:40:45.976 "ana_reporting": false, 00:40:45.976 "cntlid": 1, 00:40:45.976 "firmware_revision": "24.01.1", 00:40:45.976 "model_number": "SPDK bdev Controller", 00:40:45.976 "multi_ctrlr": true, 00:40:45.976 "oacs": { 00:40:45.976 "firmware": 0, 00:40:45.976 "format": 0, 00:40:45.976 "ns_manage": 0, 00:40:45.976 "security": 0 00:40:45.976 }, 00:40:45.976 "serial_number": "00000000000000000000", 00:40:45.976 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:45.976 "vendor_id": "0x8086" 00:40:45.976 }, 00:40:45.976 "ns_data": { 00:40:45.976 "can_share": true, 00:40:45.976 "id": 1 00:40:45.976 }, 00:40:45.976 "trid": { 00:40:45.976 "adrfam": "IPv4", 00:40:45.976 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:45.976 "traddr": "10.0.0.2", 00:40:45.976 "trsvcid": "4420", 00:40:45.976 "trtype": "TCP" 00:40:45.976 }, 00:40:45.976 "vs": { 00:40:45.976 "nvme_version": "1.3" 00:40:45.976 } 00:40:45.976 } 00:40:45.976 ] 00:40:45.976 }, 00:40:45.976 "name": "nvme0n1", 00:40:45.976 "num_blocks": 2097152, 00:40:45.976 "product_name": "NVMe disk", 00:40:45.976 "supported_io_types": { 00:40:45.976 "abort": true, 00:40:45.976 "compare": true, 00:40:45.976 "compare_and_write": true, 00:40:45.976 "flush": true, 00:40:45.976 "nvme_admin": true, 00:40:45.976 "nvme_io": true, 00:40:45.976 "read": true, 00:40:45.976 "reset": true, 00:40:45.976 "unmap": false, 00:40:45.976 "write": true, 00:40:45.976 "write_zeroes": true 00:40:45.976 }, 00:40:45.976 "uuid": "a10686ac-7804-4131-81a6-ec5ce503951a", 00:40:45.976 "zoned": false 00:40:45.976 } 00:40:45.976 ] 00:40:45.976 08:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:45.976 08:37:19 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:40:45.976 08:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:45.976 08:37:19 -- common/autotest_common.sh@10 -- # set +x 00:40:45.976 [2024-04-17 08:37:19.234138] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:45.976 [2024-04-17 08:37:19.234226] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173c7a0 (9): Bad file descriptor 00:40:46.235 [2024-04-17 08:37:19.366561] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
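A reset tears the association down and rebuilds it; the externally visible effect is the controller ID bump (cntlid 1 before, 2 after in the bdev_get_bdevs dumps around this point). The secure-channel re-attach that follows in the log then detaches the bdev and reconnects over TLS, pairing one pre-shared key file between the listener's host entry and the initiator. Both steps as bare rpc.py calls, assuming jq is installed and using /tmp/psk.txt as a stand-in for the harness's mktemp path:

  ./scripts/rpc.py bdev_nvme_reset_controller nvme0
  # cntlid increments on each new association to the same subsystem
  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
      | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'

  # PSK in NVMe TLS interchange format, readable only by the test
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > /tmp/psk.txt
  chmod 0600 /tmp/psk.txt

  ./scripts/rpc.py bdev_nvme_detach_controller nvme0
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
      nqn.2016-06.io.spdk:host1 --psk /tmp/psk.txt

  # initiator side: same key, same host NQN (TLS is flagged experimental in the log)
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
      -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/psk.txt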
00:40:46.235 08:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:46.235 08:37:19 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:40:46.235 08:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:46.235 08:37:19 -- common/autotest_common.sh@10 -- # set +x 00:40:46.235 [ 00:40:46.235 { 00:40:46.235 "aliases": [ 00:40:46.235 "a10686ac-7804-4131-81a6-ec5ce503951a" 00:40:46.235 ], 00:40:46.235 "assigned_rate_limits": { 00:40:46.235 "r_mbytes_per_sec": 0, 00:40:46.235 "rw_ios_per_sec": 0, 00:40:46.235 "rw_mbytes_per_sec": 0, 00:40:46.235 "w_mbytes_per_sec": 0 00:40:46.235 }, 00:40:46.235 "block_size": 512, 00:40:46.235 "claimed": false, 00:40:46.235 "driver_specific": { 00:40:46.235 "mp_policy": "active_passive", 00:40:46.235 "nvme": [ 00:40:46.235 { 00:40:46.235 "ctrlr_data": { 00:40:46.235 "ana_reporting": false, 00:40:46.235 "cntlid": 2, 00:40:46.235 "firmware_revision": "24.01.1", 00:40:46.235 "model_number": "SPDK bdev Controller", 00:40:46.235 "multi_ctrlr": true, 00:40:46.235 "oacs": { 00:40:46.235 "firmware": 0, 00:40:46.235 "format": 0, 00:40:46.236 "ns_manage": 0, 00:40:46.236 "security": 0 00:40:46.236 }, 00:40:46.236 "serial_number": "00000000000000000000", 00:40:46.236 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:46.236 "vendor_id": "0x8086" 00:40:46.236 }, 00:40:46.236 "ns_data": { 00:40:46.236 "can_share": true, 00:40:46.236 "id": 1 00:40:46.236 }, 00:40:46.236 "trid": { 00:40:46.236 "adrfam": "IPv4", 00:40:46.236 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:46.236 "traddr": "10.0.0.2", 00:40:46.236 "trsvcid": "4420", 00:40:46.236 "trtype": "TCP" 00:40:46.236 }, 00:40:46.236 "vs": { 00:40:46.236 "nvme_version": "1.3" 00:40:46.236 } 00:40:46.236 } 00:40:46.236 ] 00:40:46.236 }, 00:40:46.236 "name": "nvme0n1", 00:40:46.236 "num_blocks": 2097152, 00:40:46.236 "product_name": "NVMe disk", 00:40:46.236 "supported_io_types": { 00:40:46.236 "abort": true, 00:40:46.236 "compare": true, 00:40:46.236 "compare_and_write": true, 00:40:46.236 "flush": true, 00:40:46.236 "nvme_admin": true, 00:40:46.236 "nvme_io": true, 00:40:46.236 "read": true, 00:40:46.236 "reset": true, 00:40:46.236 "unmap": false, 00:40:46.236 "write": true, 00:40:46.236 "write_zeroes": true 00:40:46.236 }, 00:40:46.236 "uuid": "a10686ac-7804-4131-81a6-ec5ce503951a", 00:40:46.236 "zoned": false 00:40:46.236 } 00:40:46.236 ] 00:40:46.236 08:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:46.236 08:37:19 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:46.236 08:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:46.236 08:37:19 -- common/autotest_common.sh@10 -- # set +x 00:40:46.236 08:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:46.236 08:37:19 -- host/async_init.sh@53 -- # mktemp 00:40:46.236 08:37:19 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.0xuwlZysQT 00:40:46.236 08:37:19 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:40:46.236 08:37:19 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.0xuwlZysQT 00:40:46.236 08:37:19 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:40:46.236 08:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:46.236 08:37:19 -- common/autotest_common.sh@10 -- # set +x 00:40:46.236 08:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:46.236 08:37:19 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:40:46.236 08:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:46.236 08:37:19 -- common/autotest_common.sh@10 -- # set +x 00:40:46.236 [2024-04-17 08:37:19.433917] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:46.236 [2024-04-17 08:37:19.434072] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:46.236 08:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:46.236 08:37:19 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0xuwlZysQT 00:40:46.236 08:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:46.236 08:37:19 -- common/autotest_common.sh@10 -- # set +x 00:40:46.236 08:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:46.236 08:37:19 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0xuwlZysQT 00:40:46.236 08:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:46.236 08:37:19 -- common/autotest_common.sh@10 -- # set +x 00:40:46.236 [2024-04-17 08:37:19.453874] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:46.236 nvme0n1 00:40:46.236 08:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:46.236 08:37:19 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:40:46.236 08:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:46.236 08:37:19 -- common/autotest_common.sh@10 -- # set +x 00:40:46.236 [ 00:40:46.236 { 00:40:46.236 "aliases": [ 00:40:46.236 "a10686ac-7804-4131-81a6-ec5ce503951a" 00:40:46.236 ], 00:40:46.236 "assigned_rate_limits": { 00:40:46.236 "r_mbytes_per_sec": 0, 00:40:46.236 "rw_ios_per_sec": 0, 00:40:46.236 "rw_mbytes_per_sec": 0, 00:40:46.236 "w_mbytes_per_sec": 0 00:40:46.236 }, 00:40:46.236 "block_size": 512, 00:40:46.236 "claimed": false, 00:40:46.236 "driver_specific": { 00:40:46.236 "mp_policy": "active_passive", 00:40:46.236 "nvme": [ 00:40:46.236 { 00:40:46.236 "ctrlr_data": { 00:40:46.236 "ana_reporting": false, 00:40:46.236 "cntlid": 3, 00:40:46.236 "firmware_revision": "24.01.1", 00:40:46.236 "model_number": "SPDK bdev Controller", 00:40:46.236 "multi_ctrlr": true, 00:40:46.236 "oacs": { 00:40:46.236 "firmware": 0, 00:40:46.236 "format": 0, 00:40:46.236 "ns_manage": 0, 00:40:46.236 "security": 0 00:40:46.236 }, 00:40:46.236 "serial_number": "00000000000000000000", 00:40:46.236 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:46.236 "vendor_id": "0x8086" 00:40:46.236 }, 00:40:46.236 "ns_data": { 00:40:46.236 "can_share": true, 00:40:46.236 "id": 1 00:40:46.236 }, 00:40:46.236 "trid": { 00:40:46.236 "adrfam": "IPv4", 00:40:46.236 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:46.236 "traddr": "10.0.0.2", 00:40:46.236 "trsvcid": "4421", 00:40:46.236 "trtype": "TCP" 00:40:46.236 }, 00:40:46.236 "vs": { 00:40:46.236 "nvme_version": "1.3" 00:40:46.236 } 00:40:46.236 } 00:40:46.236 ] 00:40:46.236 }, 00:40:46.236 "name": "nvme0n1", 00:40:46.236 "num_blocks": 2097152, 00:40:46.236 "product_name": "NVMe disk", 00:40:46.236 "supported_io_types": { 00:40:46.236 "abort": true, 00:40:46.236 "compare": true, 00:40:46.236 "compare_and_write": true, 00:40:46.236 "flush": true, 00:40:46.236 "nvme_admin": true, 00:40:46.236 "nvme_io": true, 00:40:46.236 
"read": true, 00:40:46.236 "reset": true, 00:40:46.236 "unmap": false, 00:40:46.236 "write": true, 00:40:46.236 "write_zeroes": true 00:40:46.236 }, 00:40:46.236 "uuid": "a10686ac-7804-4131-81a6-ec5ce503951a", 00:40:46.236 "zoned": false 00:40:46.236 } 00:40:46.236 ] 00:40:46.236 08:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:46.236 08:37:19 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:46.236 08:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:46.236 08:37:19 -- common/autotest_common.sh@10 -- # set +x 00:40:46.236 08:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:46.236 08:37:19 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.0xuwlZysQT 00:40:46.236 08:37:19 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:40:46.236 08:37:19 -- host/async_init.sh@78 -- # nvmftestfini 00:40:46.236 08:37:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:40:46.236 08:37:19 -- nvmf/common.sh@116 -- # sync 00:40:46.495 08:37:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:40:46.495 08:37:19 -- nvmf/common.sh@119 -- # set +e 00:40:46.495 08:37:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:40:46.495 08:37:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:40:46.495 rmmod nvme_tcp 00:40:46.495 rmmod nvme_fabrics 00:40:46.495 rmmod nvme_keyring 00:40:46.495 08:37:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:40:46.495 08:37:19 -- nvmf/common.sh@123 -- # set -e 00:40:46.495 08:37:19 -- nvmf/common.sh@124 -- # return 0 00:40:46.495 08:37:19 -- nvmf/common.sh@477 -- # '[' -n 80816 ']' 00:40:46.495 08:37:19 -- nvmf/common.sh@478 -- # killprocess 80816 00:40:46.495 08:37:19 -- common/autotest_common.sh@926 -- # '[' -z 80816 ']' 00:40:46.495 08:37:19 -- common/autotest_common.sh@930 -- # kill -0 80816 00:40:46.495 08:37:19 -- common/autotest_common.sh@931 -- # uname 00:40:46.495 08:37:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:40:46.495 08:37:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80816 00:40:46.495 killing process with pid 80816 00:40:46.495 08:37:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:40:46.495 08:37:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:40:46.495 08:37:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80816' 00:40:46.495 08:37:19 -- common/autotest_common.sh@945 -- # kill 80816 00:40:46.495 08:37:19 -- common/autotest_common.sh@950 -- # wait 80816 00:40:46.753 08:37:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:40:46.753 08:37:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:40:46.753 08:37:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:40:46.753 08:37:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:46.753 08:37:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:40:46.753 08:37:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:46.753 08:37:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:46.753 08:37:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:46.753 08:37:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:40:46.753 00:40:46.753 real 0m2.578s 00:40:46.753 user 0m2.273s 00:40:46.753 sys 0m0.667s 00:40:46.753 08:37:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:46.753 08:37:19 -- common/autotest_common.sh@10 -- # set +x 00:40:46.753 ************************************ 00:40:46.753 END TEST nvmf_async_init 00:40:46.753 
************************************ 00:40:46.753 08:37:19 -- nvmf/nvmf.sh@93 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:40:46.753 08:37:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:40:46.753 08:37:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:40:46.753 08:37:19 -- common/autotest_common.sh@10 -- # set +x 00:40:46.754 ************************************ 00:40:46.754 START TEST dma 00:40:46.754 ************************************ 00:40:46.754 08:37:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:40:47.012 * Looking for test storage... 00:40:47.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:40:47.012 08:37:20 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:47.012 08:37:20 -- nvmf/common.sh@7 -- # uname -s 00:40:47.012 08:37:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:47.012 08:37:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:47.012 08:37:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:47.012 08:37:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:47.012 08:37:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:47.012 08:37:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:47.012 08:37:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:47.012 08:37:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:47.012 08:37:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:47.012 08:37:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:47.012 08:37:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:40:47.012 08:37:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:40:47.012 08:37:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:47.012 08:37:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:47.012 08:37:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:47.012 08:37:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:47.012 08:37:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:47.012 08:37:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:47.012 08:37:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:47.013 08:37:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.013 08:37:20 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.013 08:37:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.013 08:37:20 -- paths/export.sh@5 -- # export PATH 00:40:47.013 08:37:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.013 08:37:20 -- nvmf/common.sh@46 -- # : 0 00:40:47.013 08:37:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:40:47.013 08:37:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:40:47.013 08:37:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:40:47.013 08:37:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:47.013 08:37:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:47.013 08:37:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:40:47.013 08:37:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:40:47.013 08:37:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:40:47.013 08:37:20 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:40:47.013 08:37:20 -- host/dma.sh@13 -- # exit 0 00:40:47.013 00:40:47.013 real 0m0.147s 00:40:47.013 user 0m0.078s 00:40:47.013 sys 0m0.077s 00:40:47.013 08:37:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:47.013 08:37:20 -- common/autotest_common.sh@10 -- # set +x 00:40:47.013 ************************************ 00:40:47.013 END TEST dma 00:40:47.013 ************************************ 00:40:47.013 08:37:20 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:40:47.013 08:37:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:40:47.013 08:37:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:40:47.013 08:37:20 -- common/autotest_common.sh@10 -- # set +x 00:40:47.013 ************************************ 00:40:47.013 START TEST nvmf_identify 00:40:47.013 ************************************ 00:40:47.013 08:37:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:40:47.013 * Looking for test storage... 
00:40:47.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:40:47.013 08:37:20 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:47.013 08:37:20 -- nvmf/common.sh@7 -- # uname -s 00:40:47.013 08:37:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:47.013 08:37:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:47.013 08:37:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:47.013 08:37:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:47.013 08:37:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:47.013 08:37:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:47.013 08:37:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:47.013 08:37:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:47.013 08:37:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:47.272 08:37:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:47.272 08:37:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:40:47.272 08:37:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:40:47.272 08:37:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:47.272 08:37:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:47.272 08:37:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:47.272 08:37:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:47.272 08:37:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:47.272 08:37:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:47.272 08:37:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:47.272 08:37:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.272 08:37:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.272 08:37:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.272 08:37:20 -- paths/export.sh@5 
-- # export PATH 00:40:47.272 08:37:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.272 08:37:20 -- nvmf/common.sh@46 -- # : 0 00:40:47.272 08:37:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:40:47.272 08:37:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:40:47.272 08:37:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:40:47.272 08:37:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:47.272 08:37:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:47.272 08:37:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:40:47.272 08:37:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:40:47.272 08:37:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:40:47.272 08:37:20 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:47.272 08:37:20 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:47.272 08:37:20 -- host/identify.sh@14 -- # nvmftestinit 00:40:47.272 08:37:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:40:47.272 08:37:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:47.272 08:37:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:40:47.272 08:37:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:40:47.272 08:37:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:40:47.272 08:37:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:47.272 08:37:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:47.272 08:37:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:47.272 08:37:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:40:47.272 08:37:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:40:47.272 08:37:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:40:47.272 08:37:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:40:47.272 08:37:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:40:47.272 08:37:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:40:47.272 08:37:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:47.272 08:37:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:47.272 08:37:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:40:47.272 08:37:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:40:47.272 08:37:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:47.272 08:37:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:47.272 08:37:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:47.272 08:37:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:47.272 08:37:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:47.272 08:37:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:47.272 08:37:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:47.272 08:37:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:47.272 08:37:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:40:47.272 08:37:20 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:40:47.272 Cannot find device "nvmf_tgt_br" 00:40:47.272 08:37:20 -- nvmf/common.sh@154 -- # true 00:40:47.272 08:37:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:40:47.272 Cannot find device "nvmf_tgt_br2" 00:40:47.272 08:37:20 -- nvmf/common.sh@155 -- # true 00:40:47.272 08:37:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:40:47.272 08:37:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:40:47.272 Cannot find device "nvmf_tgt_br" 00:40:47.272 08:37:20 -- nvmf/common.sh@157 -- # true 00:40:47.272 08:37:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:40:47.272 Cannot find device "nvmf_tgt_br2" 00:40:47.272 08:37:20 -- nvmf/common.sh@158 -- # true 00:40:47.272 08:37:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:40:47.272 08:37:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:40:47.272 08:37:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:47.272 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:47.272 08:37:20 -- nvmf/common.sh@161 -- # true 00:40:47.272 08:37:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:47.272 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:47.272 08:37:20 -- nvmf/common.sh@162 -- # true 00:40:47.272 08:37:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:40:47.272 08:37:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:47.273 08:37:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:47.273 08:37:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:47.273 08:37:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:47.273 08:37:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:47.532 08:37:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:47.532 08:37:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:40:47.532 08:37:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:40:47.532 08:37:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:40:47.532 08:37:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:40:47.532 08:37:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:40:47.532 08:37:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:40:47.532 08:37:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:47.532 08:37:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:47.532 08:37:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:47.532 08:37:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:40:47.532 08:37:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:40:47.532 08:37:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:40:47.532 08:37:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:47.532 08:37:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:47.532 08:37:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:47.532 08:37:20 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:47.532 08:37:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:40:47.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:47.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:40:47.532 00:40:47.532 --- 10.0.0.2 ping statistics --- 00:40:47.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:47.532 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:40:47.532 08:37:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:40:47.532 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:47.532 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:40:47.532 00:40:47.532 --- 10.0.0.3 ping statistics --- 00:40:47.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:47.532 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:40:47.532 08:37:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:47.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:47.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:40:47.532 00:40:47.532 --- 10.0.0.1 ping statistics --- 00:40:47.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:47.532 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:40:47.532 08:37:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:47.532 08:37:20 -- nvmf/common.sh@421 -- # return 0 00:40:47.532 08:37:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:40:47.532 08:37:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:47.532 08:37:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:40:47.532 08:37:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:40:47.532 08:37:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:47.532 08:37:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:40:47.532 08:37:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:40:47.532 08:37:20 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:40:47.532 08:37:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:40:47.532 08:37:20 -- common/autotest_common.sh@10 -- # set +x 00:40:47.532 08:37:20 -- host/identify.sh@19 -- # nvmfpid=81080 00:40:47.532 08:37:20 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:47.532 08:37:20 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:40:47.532 08:37:20 -- host/identify.sh@23 -- # waitforlisten 81080 00:40:47.532 08:37:20 -- common/autotest_common.sh@819 -- # '[' -z 81080 ']' 00:40:47.532 08:37:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:47.532 08:37:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:40:47.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:47.532 08:37:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:47.532 08:37:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:40:47.532 08:37:20 -- common/autotest_common.sh@10 -- # set +x 00:40:47.532 [2024-04-17 08:37:20.771600] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
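(The block of ip and iptables commands plus the three pings above is nvmf_veth_init building the sandbox the target is now starting into: one veth pair per endpoint, the target ends moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers joined on the nvmf_br bridge. Reduced to its essentials, with interface names and addresses as in this run and the second target interface omitted:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # host to target namespace, matching the first ping above)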
00:40:47.532 [2024-04-17 08:37:20.771669] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:47.792 [2024-04-17 08:37:20.913988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:47.792 [2024-04-17 08:37:21.019603] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:40:47.792 [2024-04-17 08:37:21.019835] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:47.792 [2024-04-17 08:37:21.019863] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:47.792 [2024-04-17 08:37:21.019925] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:47.792 [2024-04-17 08:37:21.020729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:47.792 [2024-04-17 08:37:21.020828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:40:47.792 [2024-04-17 08:37:21.020927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:47.792 [2024-04-17 08:37:21.020931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:40:48.362 08:37:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:40:48.362 08:37:21 -- common/autotest_common.sh@852 -- # return 0 00:40:48.362 08:37:21 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:48.362 08:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:48.362 08:37:21 -- common/autotest_common.sh@10 -- # set +x 00:40:48.362 [2024-04-17 08:37:21.657017] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:48.362 08:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:48.362 08:37:21 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:40:48.362 08:37:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:40:48.362 08:37:21 -- common/autotest_common.sh@10 -- # set +x 00:40:48.621 08:37:21 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:48.621 08:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:48.621 08:37:21 -- common/autotest_common.sh@10 -- # set +x 00:40:48.621 Malloc0 00:40:48.621 08:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:48.621 08:37:21 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:48.621 08:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:48.621 08:37:21 -- common/autotest_common.sh@10 -- # set +x 00:40:48.621 08:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:48.621 08:37:21 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:40:48.621 08:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:48.621 08:37:21 -- common/autotest_common.sh@10 -- # set +x 00:40:48.621 08:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:48.621 08:37:21 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:48.621 08:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:48.621 08:37:21 -- common/autotest_common.sh@10 -- # set +x 00:40:48.621 [2024-04-17 08:37:21.789928] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:48.621 08:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:48.621 08:37:21 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:48.621 08:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:48.621 08:37:21 -- common/autotest_common.sh@10 -- # set +x 00:40:48.621 08:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:48.621 08:37:21 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:40:48.621 08:37:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:48.621 08:37:21 -- common/autotest_common.sh@10 -- # set +x 00:40:48.621 [2024-04-17 08:37:21.813683] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:40:48.621 [ 00:40:48.621 { 00:40:48.621 "allow_any_host": true, 00:40:48.621 "hosts": [], 00:40:48.621 "listen_addresses": [ 00:40:48.621 { 00:40:48.622 "adrfam": "IPv4", 00:40:48.622 "traddr": "10.0.0.2", 00:40:48.622 "transport": "TCP", 00:40:48.622 "trsvcid": "4420", 00:40:48.622 "trtype": "TCP" 00:40:48.622 } 00:40:48.622 ], 00:40:48.622 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:48.622 "subtype": "Discovery" 00:40:48.622 }, 00:40:48.622 { 00:40:48.622 "allow_any_host": true, 00:40:48.622 "hosts": [], 00:40:48.622 "listen_addresses": [ 00:40:48.622 { 00:40:48.622 "adrfam": "IPv4", 00:40:48.622 "traddr": "10.0.0.2", 00:40:48.622 "transport": "TCP", 00:40:48.622 "trsvcid": "4420", 00:40:48.622 "trtype": "TCP" 00:40:48.622 } 00:40:48.622 ], 00:40:48.622 "max_cntlid": 65519, 00:40:48.622 "max_namespaces": 32, 00:40:48.622 "min_cntlid": 1, 00:40:48.622 "model_number": "SPDK bdev Controller", 00:40:48.622 "namespaces": [ 00:40:48.622 { 00:40:48.622 "bdev_name": "Malloc0", 00:40:48.622 "eui64": "ABCDEF0123456789", 00:40:48.622 "name": "Malloc0", 00:40:48.622 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:40:48.622 "nsid": 1, 00:40:48.622 "uuid": "23503401-aa1c-4f96-9216-ad21c4abdc3c" 00:40:48.622 } 00:40:48.622 ], 00:40:48.622 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:48.622 "serial_number": "SPDK00000000000001", 00:40:48.622 "subtype": "NVMe" 00:40:48.622 } 00:40:48.622 ] 00:40:48.622 08:37:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:48.622 08:37:21 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:40:48.622 [2024-04-17 08:37:21.859313] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
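(Before the identify run, the target-side setup traced above amounts to a handful of RPCs: create the TCP transport, back a namespace with a malloc bdev, create the subsystem, attach the namespace, and expose both the subsystem and the discovery service on 10.0.0.2:4420. As a standalone sketch, with sizes and identifiers copied from the run:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # -u sets the IO unit size
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB backing bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems                          # returns the JSON dump above)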
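(The spdk_nvme_identify invocation just above drives the rest of this section: -r carries the transport ID of the discovery subsystem, and -L all enables every debug log flag, which is why the connect sequence is traced line by line. The *DEBUG* output below is the standard controller bring-up: FABRIC CONNECT on the admin queue, VS and CAP property reads, a CC.EN 0-to-1 toggle with waits on CSTS.RDY, IDENTIFY, AER configuration, keep-alive setup, and finally GET LOG PAGE reads of the discovery log that produce the report at the end. Dropping the flag should yield the same summary without the trace:

build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery')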
00:40:48.622 [2024-04-17 08:37:21.859454] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81136 ] 00:40:48.884 [2024-04-17 08:37:21.992697] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:40:48.884 [2024-04-17 08:37:21.992799] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:40:48.884 [2024-04-17 08:37:21.992805] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:40:48.884 [2024-04-17 08:37:21.992816] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:40:48.884 [2024-04-17 08:37:21.992825] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:40:48.884 [2024-04-17 08:37:21.992955] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:40:48.884 [2024-04-17 08:37:21.992991] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x142a270 0 00:40:48.884 [2024-04-17 08:37:21.998570] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:40:48.884 [2024-04-17 08:37:21.998596] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:40:48.884 [2024-04-17 08:37:21.998600] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:40:48.884 [2024-04-17 08:37:21.998603] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:40:48.884 [2024-04-17 08:37:21.998651] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.884 [2024-04-17 08:37:21.998658] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.884 [2024-04-17 08:37:21.998661] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142a270) 00:40:48.884 [2024-04-17 08:37:21.998675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:40:48.884 [2024-04-17 08:37:21.998702] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14696d0, cid 0, qid 0 00:40:48.884 [2024-04-17 08:37:22.006418] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.884 [2024-04-17 08:37:22.006444] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.884 [2024-04-17 08:37:22.006449] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.884 [2024-04-17 08:37:22.006453] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14696d0) on tqpair=0x142a270 00:40:48.884 [2024-04-17 08:37:22.006467] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:40:48.884 [2024-04-17 08:37:22.006475] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:40:48.884 [2024-04-17 08:37:22.006479] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:40:48.884 [2024-04-17 08:37:22.006497] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.884 [2024-04-17 08:37:22.006501] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.884 [2024-04-17 
08:37:22.006504] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142a270) 00:40:48.884 [2024-04-17 08:37:22.006514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.884 [2024-04-17 08:37:22.006541] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14696d0, cid 0, qid 0 00:40:48.884 [2024-04-17 08:37:22.006630] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.884 [2024-04-17 08:37:22.006639] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.884 [2024-04-17 08:37:22.006642] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.884 [2024-04-17 08:37:22.006645] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14696d0) on tqpair=0x142a270 00:40:48.884 [2024-04-17 08:37:22.006654] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:40:48.884 [2024-04-17 08:37:22.006660] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:40:48.884 [2024-04-17 08:37:22.006667] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.884 [2024-04-17 08:37:22.006670] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.884 [2024-04-17 08:37:22.006673] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142a270) 00:40:48.884 [2024-04-17 08:37:22.006680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.884 [2024-04-17 08:37:22.006698] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14696d0, cid 0, qid 0 00:40:48.884 [2024-04-17 08:37:22.006744] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.884 [2024-04-17 08:37:22.006749] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.884 [2024-04-17 08:37:22.006751] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.884 [2024-04-17 08:37:22.006754] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14696d0) on tqpair=0x142a270 00:40:48.884 [2024-04-17 08:37:22.006759] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:40:48.884 [2024-04-17 08:37:22.006766] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:40:48.884 [2024-04-17 08:37:22.006771] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.884 [2024-04-17 08:37:22.006774] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.884 [2024-04-17 08:37:22.006777] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142a270) 00:40:48.884 [2024-04-17 08:37:22.006782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.884 [2024-04-17 08:37:22.006796] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14696d0, cid 0, qid 0 00:40:48.884 [2024-04-17 08:37:22.006854] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.884 [2024-04-17 08:37:22.006863] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.884 [2024-04-17 08:37:22.006866] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.884 [2024-04-17 08:37:22.006869] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14696d0) on tqpair=0x142a270 00:40:48.884 [2024-04-17 08:37:22.006874] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:40:48.884 [2024-04-17 08:37:22.006881] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.884 [2024-04-17 08:37:22.006885] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.884 [2024-04-17 08:37:22.006888] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142a270) 00:40:48.884 [2024-04-17 08:37:22.006894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.884 [2024-04-17 08:37:22.006911] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14696d0, cid 0, qid 0 00:40:48.884 [2024-04-17 08:37:22.006966] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.884 [2024-04-17 08:37:22.006975] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.884 [2024-04-17 08:37:22.006979] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.884 [2024-04-17 08:37:22.006984] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14696d0) on tqpair=0x142a270 00:40:48.884 [2024-04-17 08:37:22.006989] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:40:48.884 [2024-04-17 08:37:22.006993] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:40:48.884 [2024-04-17 08:37:22.006999] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:40:48.884 [2024-04-17 08:37:22.007102] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:40:48.884 [2024-04-17 08:37:22.007126] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:40:48.884 [2024-04-17 08:37:22.007134] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.884 [2024-04-17 08:37:22.007137] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.884 [2024-04-17 08:37:22.007140] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142a270) 00:40:48.884 [2024-04-17 08:37:22.007146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.884 [2024-04-17 08:37:22.007162] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14696d0, cid 0, qid 0 00:40:48.884 [2024-04-17 08:37:22.007216] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.884 [2024-04-17 08:37:22.007225] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.884 [2024-04-17 08:37:22.007228] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:40:48.884 [2024-04-17 08:37:22.007231] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14696d0) on tqpair=0x142a270 00:40:48.884 [2024-04-17 08:37:22.007235] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:40:48.884 [2024-04-17 08:37:22.007243] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.884 [2024-04-17 08:37:22.007246] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.884 [2024-04-17 08:37:22.007249] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142a270) 00:40:48.884 [2024-04-17 08:37:22.007255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.884 [2024-04-17 08:37:22.007269] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14696d0, cid 0, qid 0 00:40:48.884 [2024-04-17 08:37:22.007321] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.884 [2024-04-17 08:37:22.007329] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.884 [2024-04-17 08:37:22.007334] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007338] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14696d0) on tqpair=0x142a270 00:40:48.885 [2024-04-17 08:37:22.007345] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:40:48.885 [2024-04-17 08:37:22.007351] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:40:48.885 [2024-04-17 08:37:22.007359] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:40:48.885 [2024-04-17 08:37:22.007373] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:40:48.885 [2024-04-17 08:37:22.007385] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007390] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007408] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142a270) 00:40:48.885 [2024-04-17 08:37:22.007415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.885 [2024-04-17 08:37:22.007434] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14696d0, cid 0, qid 0 00:40:48.885 [2024-04-17 08:37:22.007516] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:48.885 [2024-04-17 08:37:22.007523] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:48.885 [2024-04-17 08:37:22.007526] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007529] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x142a270): datao=0, datal=4096, cccid=0 00:40:48.885 [2024-04-17 08:37:22.007533] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14696d0) on tqpair(0x142a270): expected_datao=0, 
payload_size=4096 00:40:48.885 [2024-04-17 08:37:22.007542] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007545] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007553] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.885 [2024-04-17 08:37:22.007558] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.885 [2024-04-17 08:37:22.007560] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007563] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14696d0) on tqpair=0x142a270 00:40:48.885 [2024-04-17 08:37:22.007571] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:40:48.885 [2024-04-17 08:37:22.007579] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:40:48.885 [2024-04-17 08:37:22.007583] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:40:48.885 [2024-04-17 08:37:22.007587] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:40:48.885 [2024-04-17 08:37:22.007590] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:40:48.885 [2024-04-17 08:37:22.007594] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:40:48.885 [2024-04-17 08:37:22.007600] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:40:48.885 [2024-04-17 08:37:22.007606] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007609] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007612] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142a270) 00:40:48.885 [2024-04-17 08:37:22.007618] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:48.885 [2024-04-17 08:37:22.007632] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14696d0, cid 0, qid 0 00:40:48.885 [2024-04-17 08:37:22.007696] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.885 [2024-04-17 08:37:22.007703] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.885 [2024-04-17 08:37:22.007705] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007708] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14696d0) on tqpair=0x142a270 00:40:48.885 [2024-04-17 08:37:22.007715] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007718] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007721] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142a270) 00:40:48.885 [2024-04-17 08:37:22.007726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:48.885 [2024-04-17 
08:37:22.007731] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007734] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007736] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x142a270) 00:40:48.885 [2024-04-17 08:37:22.007741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:48.885 [2024-04-17 08:37:22.007747] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007750] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007753] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x142a270) 00:40:48.885 [2024-04-17 08:37:22.007757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:48.885 [2024-04-17 08:37:22.007762] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007765] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007767] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.885 [2024-04-17 08:37:22.007772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:48.885 [2024-04-17 08:37:22.007775] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:40:48.885 [2024-04-17 08:37:22.007784] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:40:48.885 [2024-04-17 08:37:22.007790] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007792] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007795] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x142a270) 00:40:48.885 [2024-04-17 08:37:22.007801] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.885 [2024-04-17 08:37:22.007815] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14696d0, cid 0, qid 0 00:40:48.885 [2024-04-17 08:37:22.007820] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469830, cid 1, qid 0 00:40:48.885 [2024-04-17 08:37:22.007824] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469990, cid 2, qid 0 00:40:48.885 [2024-04-17 08:37:22.007828] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.885 [2024-04-17 08:37:22.007831] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469c50, cid 4, qid 0 00:40:48.885 [2024-04-17 08:37:22.007941] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.885 [2024-04-17 08:37:22.007950] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.885 [2024-04-17 08:37:22.007953] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007956] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x1469c50) on tqpair=0x142a270 00:40:48.885 [2024-04-17 08:37:22.007961] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:40:48.885 [2024-04-17 08:37:22.007965] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:40:48.885 [2024-04-17 08:37:22.007974] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007977] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.007980] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x142a270) 00:40:48.885 [2024-04-17 08:37:22.007985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.885 [2024-04-17 08:37:22.007998] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469c50, cid 4, qid 0 00:40:48.885 [2024-04-17 08:37:22.008056] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:48.885 [2024-04-17 08:37:22.008062] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:48.885 [2024-04-17 08:37:22.008064] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.008067] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x142a270): datao=0, datal=4096, cccid=4 00:40:48.885 [2024-04-17 08:37:22.008070] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1469c50) on tqpair(0x142a270): expected_datao=0, payload_size=4096 00:40:48.885 [2024-04-17 08:37:22.008077] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.008079] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.008086] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.885 [2024-04-17 08:37:22.008091] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.885 [2024-04-17 08:37:22.008093] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.008096] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469c50) on tqpair=0x142a270 00:40:48.885 [2024-04-17 08:37:22.008106] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:40:48.885 [2024-04-17 08:37:22.008123] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.008127] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.885 [2024-04-17 08:37:22.008129] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x142a270) 00:40:48.885 [2024-04-17 08:37:22.008135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.885 [2024-04-17 08:37:22.008140] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.886 [2024-04-17 08:37:22.008143] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.886 [2024-04-17 08:37:22.008146] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x142a270) 00:40:48.886 [2024-04-17 08:37:22.008151] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:40:48.886 [2024-04-17 08:37:22.008168] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469c50, cid 4, qid 0 00:40:48.886 [2024-04-17 08:37:22.008172] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469db0, cid 5, qid 0 00:40:48.886 [2024-04-17 08:37:22.008306] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:48.886 [2024-04-17 08:37:22.008325] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:48.886 [2024-04-17 08:37:22.008330] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:48.886 [2024-04-17 08:37:22.008334] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x142a270): datao=0, datal=1024, cccid=4 00:40:48.886 [2024-04-17 08:37:22.008340] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1469c50) on tqpair(0x142a270): expected_datao=0, payload_size=1024 00:40:48.886 [2024-04-17 08:37:22.008348] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:48.886 [2024-04-17 08:37:22.008353] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:48.886 [2024-04-17 08:37:22.008359] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.886 [2024-04-17 08:37:22.008366] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.886 [2024-04-17 08:37:22.008370] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.886 [2024-04-17 08:37:22.008375] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469db0) on tqpair=0x142a270 00:40:48.886 [2024-04-17 08:37:22.049501] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.886 [2024-04-17 08:37:22.049532] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.886 [2024-04-17 08:37:22.049536] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.886 [2024-04-17 08:37:22.049540] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469c50) on tqpair=0x142a270 00:40:48.886 [2024-04-17 08:37:22.049559] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.886 [2024-04-17 08:37:22.049563] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.886 [2024-04-17 08:37:22.049566] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x142a270) 00:40:48.886 [2024-04-17 08:37:22.049576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.886 [2024-04-17 08:37:22.049608] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469c50, cid 4, qid 0 00:40:48.886 [2024-04-17 08:37:22.049701] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:48.886 [2024-04-17 08:37:22.049706] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:48.886 [2024-04-17 08:37:22.049709] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:48.886 [2024-04-17 08:37:22.049712] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x142a270): datao=0, datal=3072, cccid=4 00:40:48.886 [2024-04-17 08:37:22.049716] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1469c50) on tqpair(0x142a270): expected_datao=0, payload_size=3072 00:40:48.886 [2024-04-17 
08:37:22.049723] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:48.886 [2024-04-17 08:37:22.049727] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:48.886 [2024-04-17 08:37:22.049734] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.886 [2024-04-17 08:37:22.049739] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.886 [2024-04-17 08:37:22.049742] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.886 [2024-04-17 08:37:22.049745] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469c50) on tqpair=0x142a270 00:40:48.886 [2024-04-17 08:37:22.049753] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.886 [2024-04-17 08:37:22.049756] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.886 [2024-04-17 08:37:22.049759] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x142a270) 00:40:48.886 [2024-04-17 08:37:22.049764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.886 [2024-04-17 08:37:22.049782] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469c50, cid 4, qid 0 00:40:48.886 [2024-04-17 08:37:22.049871] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:48.886 [2024-04-17 08:37:22.049878] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:48.886 [2024-04-17 08:37:22.049880] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:48.886 [2024-04-17 08:37:22.049883] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x142a270): datao=0, datal=8, cccid=4 00:40:48.886 [2024-04-17 08:37:22.049886] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1469c50) on tqpair(0x142a270): expected_datao=0, payload_size=8 00:40:48.886 [2024-04-17 08:37:22.049892] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:48.886 [2024-04-17 08:37:22.049895] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:40:48.886 =====================================================
00:40:48.886 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:40:48.886 =====================================================
00:40:48.886 Controller Capabilities/Features
00:40:48.886 ================================
00:40:48.886 Vendor ID: 0000
00:40:48.886 Subsystem Vendor ID: 0000
00:40:48.886 Serial Number: ....................
00:40:48.886 Model Number: ........................................
00:40:48.886 Firmware Version: 24.01.1
00:40:48.886 Recommended Arb Burst: 0
00:40:48.886 IEEE OUI Identifier: 00 00 00
00:40:48.886 Multi-path I/O
00:40:48.886 May have multiple subsystem ports: No
00:40:48.886 May have multiple controllers: No
00:40:48.886 Associated with SR-IOV VF: No
00:40:48.886 Max Data Transfer Size: 131072
00:40:48.886 Max Number of Namespaces: 0
00:40:48.886 Max Number of I/O Queues: 1024
00:40:48.886 NVMe Specification Version (VS): 1.3
00:40:48.886 NVMe Specification Version (Identify): 1.3
00:40:48.886 Maximum Queue Entries: 128
00:40:48.886 Contiguous Queues Required: Yes
00:40:48.886 Arbitration Mechanisms Supported
00:40:48.886 Weighted Round Robin: Not Supported
00:40:48.886 Vendor Specific: Not Supported
00:40:48.886 Reset Timeout: 15000 ms
00:40:48.886 Doorbell Stride: 4 bytes
00:40:48.886 NVM Subsystem Reset: Not Supported
00:40:48.886 Command Sets Supported
00:40:48.886 NVM Command Set: Supported
00:40:48.886 Boot Partition: Not Supported
00:40:48.886 Memory Page Size Minimum: 4096 bytes
00:40:48.886 Memory Page Size Maximum: 4096 bytes
00:40:48.886 Persistent Memory Region: Not Supported
00:40:48.886 Optional Asynchronous Events Supported
00:40:48.886 Namespace Attribute Notices: Not Supported
00:40:48.886 Firmware Activation Notices: Not Supported
00:40:48.886 ANA Change Notices: Not Supported
00:40:48.886 PLE Aggregate Log Change Notices: Not Supported
00:40:48.886 LBA Status Info Alert Notices: Not Supported
00:40:48.886 EGE Aggregate Log Change Notices: Not Supported
00:40:48.886 Normal NVM Subsystem Shutdown event: Not Supported
00:40:48.886 Zone Descriptor Change Notices: Not Supported
00:40:48.886 Discovery Log Change Notices: Supported
00:40:48.886 Controller Attributes
00:40:48.886 128-bit Host Identifier: Not Supported
00:40:48.886 Non-Operational Permissive Mode: Not Supported
00:40:48.886 NVM Sets: Not Supported
00:40:48.886 Read Recovery Levels: Not Supported
00:40:48.886 Endurance Groups: Not Supported
00:40:48.886 Predictable Latency Mode: Not Supported
00:40:48.886 Traffic Based Keep Alive: Not Supported
00:40:48.886 Namespace Granularity: Not Supported
00:40:48.886 SQ Associations: Not Supported
00:40:48.886 UUID List: Not Supported
00:40:48.886 Multi-Domain Subsystem: Not Supported
00:40:48.886 Fixed Capacity Management: Not Supported
00:40:48.886 Variable Capacity Management: Not Supported
00:40:48.886 Delete Endurance Group: Not Supported
00:40:48.886 Delete NVM Set: Not Supported
00:40:48.886 Extended LBA Formats Supported: Not Supported
00:40:48.886 Flexible Data Placement Supported: Not Supported
00:40:48.886
00:40:48.886 Controller Memory Buffer Support
00:40:48.886 ================================
00:40:48.886 Supported: No
00:40:48.886
00:40:48.886 Persistent Memory Region Support
00:40:48.886 ================================
00:40:48.886 Supported: No
00:40:48.886
00:40:48.886 Admin Command Set Attributes
00:40:48.886 ============================
00:40:48.886 Security Send/Receive: Not Supported
00:40:48.886 Format NVM: Not Supported
00:40:48.886 Firmware Activate/Download: Not Supported
00:40:48.886 Namespace Management: Not Supported
00:40:48.886 Device Self-Test: Not Supported
00:40:48.886 Directives: Not Supported
00:40:48.886 NVMe-MI: Not Supported
00:40:48.886 Virtualization Management: Not Supported
00:40:48.886 Doorbell Buffer Config: Not Supported
00:40:48.886 Get LBA Status Capability: Not Supported
00:40:48.886 Command & Feature Lockdown Capability: Not Supported
00:40:48.886 Abort Command Limit: 1
00:40:48.886 Async Event Request Limit: 4
00:40:48.886 Number of Firmware Slots: N/A
00:40:48.886 Firmware Slot 1 Read-Only: N/A
00:40:48.886 Firmware Activation Without Reset: N/A
00:40:48.886 Multiple Update Detection Support: N/A
00:40:48.887 Firmware Update Granularity: No Information Provided
00:40:48.887 Per-Namespace SMART Log: No
00:40:48.887 Asymmetric Namespace Access Log Page: Not Supported
00:40:48.887 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:40:48.887 Command Effects Log Page: Not Supported
00:40:48.887 Get Log Page Extended Data: Supported
00:40:48.887 Telemetry Log Pages: Not Supported
00:40:48.887 Persistent Event Log Pages: Not Supported
00:40:48.887 Supported Log Pages Log Page: May Support
00:40:48.887 Commands Supported & Effects Log Page: Not Supported
00:40:48.887 Feature Identifiers & Effects Log Page: May Support
00:40:48.887 NVMe-MI Commands & Effects Log Page: May Support
00:40:48.887 Data Area 4 for Telemetry Log: Not Supported
00:40:48.887 Error Log Page Entries Supported: 128
00:40:48.887 Keep Alive: Not Supported
00:40:48.887
00:40:48.887 NVM Command Set Attributes
00:40:48.887 ==========================
00:40:48.887 Submission Queue Entry Size
00:40:48.887 Max: 1
00:40:48.887 Min: 1
00:40:48.887 Completion Queue Entry Size
00:40:48.887 Max: 1
00:40:48.887 Min: 1
00:40:48.887 Number of Namespaces: 0
00:40:48.887 Compare Command: Not Supported
00:40:48.887 Write Uncorrectable Command: Not Supported
00:40:48.887 Dataset Management Command: Not Supported
00:40:48.887 Write Zeroes Command: Not Supported
00:40:48.887 Set Features Save Field: Not Supported
00:40:48.887 Reservations: Not Supported
00:40:48.887 Timestamp: Not Supported
00:40:48.887 Copy: Not Supported
00:40:48.887 Volatile Write Cache: Not Present
00:40:48.887 Atomic Write Unit (Normal): 1
00:40:48.887 Atomic Write Unit (PFail): 1
00:40:48.887 Atomic Compare & Write Unit: 1
00:40:48.887 Fused Compare & Write: Supported
00:40:48.887 Scatter-Gather List
00:40:48.887 SGL Command Set: Supported
00:40:48.887 SGL Keyed: Supported
00:40:48.887 SGL Bit Bucket Descriptor: Not Supported
00:40:48.887 SGL Metadata Pointer: Not Supported
00:40:48.887 Oversized SGL: Not Supported
00:40:48.887 SGL Metadata Address: Not Supported
00:40:48.887 SGL Offset: Supported
00:40:48.887 Transport SGL Data Block: Not Supported
00:40:48.887 Replay Protected Memory Block: Not Supported
00:40:48.887
00:40:48.887 Firmware Slot Information
00:40:48.887 =========================
00:40:48.887 Active slot: 0
00:40:48.887
00:40:48.887
00:40:48.887 Error Log
00:40:48.887 =========
00:40:48.887
00:40:48.887 Active Namespaces
00:40:48.887 =================
00:40:48.887 Discovery Log Page
00:40:48.887 ==================
00:40:48.887 Generation Counter: 2
00:40:48.887 Number of Records: 2
00:40:48.887 Record Format: 0
00:40:48.887
00:40:48.887 Discovery Log Entry 0
00:40:48.887 ----------------------
00:40:48.887 Transport Type: 3 (TCP)
00:40:48.887 Address Family: 1 (IPv4)
00:40:48.887 Subsystem Type: 3 (Current Discovery Subsystem)
00:40:48.887 Entry Flags:
00:40:48.887 Duplicate Returned Information: 1
00:40:48.887 Explicit Persistent Connection Support for Discovery: 1
00:40:48.887 Transport Requirements:
00:40:48.887 Secure Channel: Not Required
00:40:48.887 Port ID: 0 (0x0000)
00:40:48.887 Controller ID: 65535 (0xffff)
00:40:48.887 Admin Max SQ Size: 128
00:40:48.887 Transport Service Identifier: 4420
00:40:48.887 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:40:48.887 Transport Address: 10.0.0.2
00:40:48.887 Discovery Log Entry 1
00:40:48.887 ----------------------
00:40:48.887 Transport Type: 3 (TCP)
00:40:48.887 Address Family: 1 (IPv4)
00:40:48.887 Subsystem Type: 2 (NVM Subsystem)
00:40:48.887 Entry Flags:
00:40:48.887 Duplicate Returned Information: 0
00:40:48.887 Explicit Persistent Connection Support for Discovery: 0
00:40:48.887 Transport Requirements:
00:40:48.887 Secure Channel: Not Required
00:40:48.887 Port ID: 0 (0x0000)
00:40:48.887 Controller ID: 65535 (0xffff)
00:40:48.887 Admin Max SQ Size: 128
00:40:48.887 Transport Service Identifier: 4420
00:40:48.887 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:40:48.887 Transport Address: 10.0.0.2 [2024-04-17 08:37:22.094435] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.887 [2024-04-17 08:37:22.094465] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.887 [2024-04-17 08:37:22.094469] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.887 [2024-04-17 08:37:22.094473] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469c50) on tqpair=0x142a270 00:40:48.887 [2024-04-17 08:37:22.094581] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:40:48.887 [2024-04-17 08:37:22.094594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:48.887 [2024-04-17 08:37:22.094601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:48.887 [2024-04-17 08:37:22.094606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:48.887 [2024-04-17 08:37:22.094611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:48.887 [2024-04-17 08:37:22.094624] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.887 [2024-04-17 08:37:22.094628] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.887 [2024-04-17 08:37:22.094631] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.887 [2024-04-17 08:37:22.094639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.887 [2024-04-17 08:37:22.094661] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.887 [2024-04-17 08:37:22.094727] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.887 [2024-04-17 08:37:22.094736] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.887 [2024-04-17 08:37:22.094740] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.887 [2024-04-17 08:37:22.094745] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.887 [2024-04-17 08:37:22.094755] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.887 [2024-04-17 08:37:22.094760] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.887 [2024-04-17 08:37:22.094764] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.887 [2024-04-17 08:37:22.094773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY
SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.887 [2024-04-17 08:37:22.094800] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.887 [2024-04-17 08:37:22.094867] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.887 [2024-04-17 08:37:22.094874] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.887 [2024-04-17 08:37:22.094877] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.887 [2024-04-17 08:37:22.094880] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.887 [2024-04-17 08:37:22.094885] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:40:48.887 [2024-04-17 08:37:22.094889] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:40:48.887 [2024-04-17 08:37:22.094897] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.887 [2024-04-17 08:37:22.094900] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.887 [2024-04-17 08:37:22.094903] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.887 [2024-04-17 08:37:22.094909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.887 [2024-04-17 08:37:22.094922] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.887 [2024-04-17 08:37:22.095005] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.888 [2024-04-17 08:37:22.095011] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.888 [2024-04-17 08:37:22.095013] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095016] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.888 [2024-04-17 08:37:22.095026] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095029] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095032] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.888 [2024-04-17 08:37:22.095038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.888 [2024-04-17 08:37:22.095051] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.888 [2024-04-17 08:37:22.095101] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.888 [2024-04-17 08:37:22.095107] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.888 [2024-04-17 08:37:22.095109] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095112] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.888 [2024-04-17 08:37:22.095120] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095124] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095127] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x142a270) 00:40:48.888 [2024-04-17 08:37:22.095132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.888 [2024-04-17 08:37:22.095145] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.888 [2024-04-17 08:37:22.095194] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.888 [2024-04-17 08:37:22.095200] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.888 [2024-04-17 08:37:22.095202] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095205] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.888 [2024-04-17 08:37:22.095213] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095216] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095219] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.888 [2024-04-17 08:37:22.095224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.888 [2024-04-17 08:37:22.095237] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.888 [2024-04-17 08:37:22.095289] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.888 [2024-04-17 08:37:22.095294] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.888 [2024-04-17 08:37:22.095297] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095300] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.888 [2024-04-17 08:37:22.095308] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095311] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095314] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.888 [2024-04-17 08:37:22.095319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.888 [2024-04-17 08:37:22.095331] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.888 [2024-04-17 08:37:22.095384] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.888 [2024-04-17 08:37:22.095390] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.888 [2024-04-17 08:37:22.095405] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095408] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.888 [2024-04-17 08:37:22.095416] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095419] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095422] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.888 [2024-04-17 08:37:22.095428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
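The GET LOG PAGE (02) NOTICE records above show how the host pulled the discovery log that was just printed: cdw10 carries the log page ID in bits 07:00 and the zero-based dword count NUMDL in bits 27:16, so cdw10:00ff0070 reads (0x0ff + 1) * 4 = 1024 bytes of log page 0x70 (the 1024-byte discovery log header), cdw10:02ff0070 re-reads (0x2ff + 1) * 4 = 3072 bytes once the two 1024-byte records are known, and cdw10:00010070 fetches the 8-byte header again to confirm the generation counter, matching the datal=1024/3072/8 values in the c2h_data records. A minimal sketch of issuing the same read through SPDK's public host API, assuming an already connected ctrlr; the helper names read_done and fetch_discovery_log are illustrative, not the test's own code, and error handling is trimmed:

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_log_done;

/* Completion callback for the GET LOG PAGE capsule (cid=4 in the records above). */
static void
read_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	struct spdk_nvmf_discovery_log_page *log = ctx;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n", log->genctr, log->numrec);
	}
	g_log_done = true;
}

/* Read the first 1024 bytes of the discovery log (LID 0x70), the equivalent of
 * the cdw10:00ff0070 command above; a second, larger read would follow once
 * numrec is known (cdw10:02ff0070 here). */
static int
fetch_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvmf_discovery_log_page *log;
	int rc;

	log = spdk_zmalloc(1024, 0, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	if (log == NULL) {
		return -ENOMEM;
	}
	rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
					      log, 1024, 0, read_done, log);
	if (rc == 0) {
		/* Drive the admin qpair until the completion above fires. */
		while (!g_log_done) {
			spdk_nvme_ctrlr_process_admin_completions(ctrlr);
		}
	}
	spdk_free(log);
	return rc;
}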
00:40:48.888 [2024-04-17 08:37:22.095443] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.888 [2024-04-17 08:37:22.095495] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.888 [2024-04-17 08:37:22.095500] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.888 [2024-04-17 08:37:22.095503] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095506] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.888 [2024-04-17 08:37:22.095514] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095517] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095520] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.888 [2024-04-17 08:37:22.095526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.888 [2024-04-17 08:37:22.095538] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.888 [2024-04-17 08:37:22.095589] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.888 [2024-04-17 08:37:22.095594] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.888 [2024-04-17 08:37:22.095597] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095599] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.888 [2024-04-17 08:37:22.095607] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095611] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095613] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.888 [2024-04-17 08:37:22.095619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.888 [2024-04-17 08:37:22.095631] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.888 [2024-04-17 08:37:22.095693] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.888 [2024-04-17 08:37:22.095698] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.888 [2024-04-17 08:37:22.095700] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095704] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.888 [2024-04-17 08:37:22.095712] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095715] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095718] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.888 [2024-04-17 08:37:22.095723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.888 [2024-04-17 08:37:22.095736] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.888 [2024-04-17 08:37:22.095795] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.888 [2024-04-17 08:37:22.095804] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.888 [2024-04-17 08:37:22.095808] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095812] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.888 [2024-04-17 08:37:22.095823] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095829] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095833] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.888 [2024-04-17 08:37:22.095840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.888 [2024-04-17 08:37:22.095858] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.888 [2024-04-17 08:37:22.095910] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.888 [2024-04-17 08:37:22.095916] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.888 [2024-04-17 08:37:22.095918] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095921] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.888 [2024-04-17 08:37:22.095929] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095933] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.095936] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.888 [2024-04-17 08:37:22.095942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.888 [2024-04-17 08:37:22.095955] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.888 [2024-04-17 08:37:22.096020] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.888 [2024-04-17 08:37:22.096027] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.888 [2024-04-17 08:37:22.096031] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.096036] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.888 [2024-04-17 08:37:22.096047] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.096052] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.888 [2024-04-17 08:37:22.096056] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.889 [2024-04-17 08:37:22.096065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.889 [2024-04-17 08:37:22.096084] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.889 [2024-04-17 08:37:22.096144] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.889 [2024-04-17 08:37:22.096154] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.889 
[2024-04-17 08:37:22.096159] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096164] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.889 [2024-04-17 08:37:22.096176] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096181] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096186] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.889 [2024-04-17 08:37:22.096193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.889 [2024-04-17 08:37:22.096208] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.889 [2024-04-17 08:37:22.096270] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.889 [2024-04-17 08:37:22.096275] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.889 [2024-04-17 08:37:22.096278] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096280] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.889 [2024-04-17 08:37:22.096289] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096292] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096295] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.889 [2024-04-17 08:37:22.096300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.889 [2024-04-17 08:37:22.096313] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.889 [2024-04-17 08:37:22.096366] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.889 [2024-04-17 08:37:22.096374] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.889 [2024-04-17 08:37:22.096379] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096383] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.889 [2024-04-17 08:37:22.096408] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096414] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096417] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.889 [2024-04-17 08:37:22.096425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.889 [2024-04-17 08:37:22.096446] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.889 [2024-04-17 08:37:22.096496] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.889 [2024-04-17 08:37:22.096503] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.889 [2024-04-17 08:37:22.096508] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096513] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.889 [2024-04-17 08:37:22.096524] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096529] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096533] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.889 [2024-04-17 08:37:22.096541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.889 [2024-04-17 08:37:22.096561] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.889 [2024-04-17 08:37:22.096627] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.889 [2024-04-17 08:37:22.096634] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.889 [2024-04-17 08:37:22.096638] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096641] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.889 [2024-04-17 08:37:22.096650] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096653] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096656] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.889 [2024-04-17 08:37:22.096662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.889 [2024-04-17 08:37:22.096675] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.889 [2024-04-17 08:37:22.096736] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.889 [2024-04-17 08:37:22.096742] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.889 [2024-04-17 08:37:22.096744] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096747] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.889 [2024-04-17 08:37:22.096755] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096758] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096761] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.889 [2024-04-17 08:37:22.096767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.889 [2024-04-17 08:37:22.096780] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.889 [2024-04-17 08:37:22.096842] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.889 [2024-04-17 08:37:22.096847] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.889 [2024-04-17 08:37:22.096850] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096852] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.889 [2024-04-17 08:37:22.096861] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096864] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096867] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.889 [2024-04-17 08:37:22.096872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.889 [2024-04-17 08:37:22.096885] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.889 [2024-04-17 08:37:22.096943] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.889 [2024-04-17 08:37:22.096948] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.889 [2024-04-17 08:37:22.096950] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096953] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.889 [2024-04-17 08:37:22.096962] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096965] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.889 [2024-04-17 08:37:22.096967] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.889 [2024-04-17 08:37:22.096973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.890 [2024-04-17 08:37:22.096985] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.890 [2024-04-17 08:37:22.097037] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.890 [2024-04-17 08:37:22.097042] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.890 [2024-04-17 08:37:22.097045] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097048] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.890 [2024-04-17 08:37:22.097056] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097060] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097062] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.890 [2024-04-17 08:37:22.097068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.890 [2024-04-17 08:37:22.097080] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.890 [2024-04-17 08:37:22.097136] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.890 [2024-04-17 08:37:22.097141] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.890 [2024-04-17 08:37:22.097143] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097146] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.890 [2024-04-17 08:37:22.097155] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097158] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097160] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x142a270) 00:40:48.890 [2024-04-17 08:37:22.097166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.890 [2024-04-17 08:37:22.097178] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.890 [2024-04-17 08:37:22.097237] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.890 [2024-04-17 08:37:22.097242] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.890 [2024-04-17 08:37:22.097245] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097248] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.890 [2024-04-17 08:37:22.097256] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097259] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097262] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.890 [2024-04-17 08:37:22.097267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.890 [2024-04-17 08:37:22.097279] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.890 [2024-04-17 08:37:22.097341] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.890 [2024-04-17 08:37:22.097346] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.890 [2024-04-17 08:37:22.097350] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097354] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.890 [2024-04-17 08:37:22.097364] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097369] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097374] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.890 [2024-04-17 08:37:22.097382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.890 [2024-04-17 08:37:22.097409] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.890 [2024-04-17 08:37:22.097465] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.890 [2024-04-17 08:37:22.097470] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.890 [2024-04-17 08:37:22.097473] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097476] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.890 [2024-04-17 08:37:22.097484] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097488] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097490] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.890 [2024-04-17 08:37:22.097496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
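The run of near-identical FABRIC PROPERTY GET records here is the shutdown poll: nvme_ctrlr_shutdown_set_cc_done has written the shutdown notification into CC, and nvme_ctrlr_shutdown_poll_async keeps re-reading the CSTS register (offset 0x1c) over the fabrics Property Get command until its SHST field (bits 03:02) reports shutdown complete; the log notes completion a few records below. As an illustration only, since the destruct path in this log does the polling internally, the same register can be read and decoded through SPDK's public API:

#include "spdk/nvme.h"

/* Illustrative sketch: poll CSTS until SHST reports shutdown complete (10b).
 * spdk_nvme_detach() runs the equivalent loop internally, as the records
 * above show for the discovery controller. */
static void
wait_for_shutdown_complete(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_csts_register csts;

	do {
		/* On fabrics transports this is a Property Get of CSTS. */
		csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);
	} while (csts.bits.shst != SPDK_NVME_SHST_COMPLETE);
}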
00:40:48.890 [2024-04-17 08:37:22.097509] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.890 [2024-04-17 08:37:22.097570] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.890 [2024-04-17 08:37:22.097583] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.890 [2024-04-17 08:37:22.097588] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097592] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.890 [2024-04-17 08:37:22.097604] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097609] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097613] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.890 [2024-04-17 08:37:22.097621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.890 [2024-04-17 08:37:22.097641] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.890 [2024-04-17 08:37:22.097693] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.890 [2024-04-17 08:37:22.097699] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.890 [2024-04-17 08:37:22.097701] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097704] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.890 [2024-04-17 08:37:22.097713] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097716] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097719] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.890 [2024-04-17 08:37:22.097724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.890 [2024-04-17 08:37:22.097737] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.890 [2024-04-17 08:37:22.097801] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.890 [2024-04-17 08:37:22.097807] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.890 [2024-04-17 08:37:22.097811] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097816] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.890 [2024-04-17 08:37:22.097827] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097832] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097836] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.890 [2024-04-17 08:37:22.097844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.890 [2024-04-17 08:37:22.097874] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.890 [2024-04-17 08:37:22.097928] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.890 [2024-04-17 08:37:22.097935] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.890 [2024-04-17 08:37:22.097939] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097944] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.890 [2024-04-17 08:37:22.097955] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097960] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.097965] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.890 [2024-04-17 08:37:22.097974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.890 [2024-04-17 08:37:22.097994] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.890 [2024-04-17 08:37:22.098043] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.890 [2024-04-17 08:37:22.098048] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.890 [2024-04-17 08:37:22.098051] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.098054] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.890 [2024-04-17 08:37:22.098062] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.098066] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.098068] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.890 [2024-04-17 08:37:22.098074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.890 [2024-04-17 08:37:22.098086] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.890 [2024-04-17 08:37:22.098145] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.890 [2024-04-17 08:37:22.098155] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.890 [2024-04-17 08:37:22.098158] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.098161] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.890 [2024-04-17 08:37:22.098171] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.890 [2024-04-17 08:37:22.098175] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.891 [2024-04-17 08:37:22.098180] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.891 [2024-04-17 08:37:22.098187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.891 [2024-04-17 08:37:22.098205] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.891 [2024-04-17 08:37:22.098254] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.891 [2024-04-17 08:37:22.098259] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.891 
[2024-04-17 08:37:22.098262] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.891 [2024-04-17 08:37:22.098267] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.891 [2024-04-17 08:37:22.098279] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.891 [2024-04-17 08:37:22.098283] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.891 [2024-04-17 08:37:22.098287] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.891 [2024-04-17 08:37:22.098295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.891 [2024-04-17 08:37:22.098315] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.891 [2024-04-17 08:37:22.098381] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.891 [2024-04-17 08:37:22.102411] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.891 [2024-04-17 08:37:22.102431] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.891 [2024-04-17 08:37:22.102436] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.891 [2024-04-17 08:37:22.102450] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:48.891 [2024-04-17 08:37:22.102453] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:48.891 [2024-04-17 08:37:22.102456] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142a270) 00:40:48.891 [2024-04-17 08:37:22.102464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:48.891 [2024-04-17 08:37:22.102489] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1469af0, cid 3, qid 0 00:40:48.891 [2024-04-17 08:37:22.102543] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:48.891 [2024-04-17 08:37:22.102548] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:48.891 [2024-04-17 08:37:22.102551] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:48.891 [2024-04-17 08:37:22.102554] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1469af0) on tqpair=0x142a270 00:40:48.891 [2024-04-17 08:37:22.102561] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:40:48.891 00:40:48.891 08:37:22 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:40:48.891 [2024-04-17 08:37:22.144359] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:40:48.891 [2024-04-17 08:37:22.144424] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81138 ] 00:40:49.153 [2024-04-17 08:37:22.284000] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:40:49.153 [2024-04-17 08:37:22.284078] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:40:49.153 [2024-04-17 08:37:22.284084] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:40:49.153 [2024-04-17 08:37:22.284095] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:40:49.153 [2024-04-17 08:37:22.284104] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:40:49.153 [2024-04-17 08:37:22.284226] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:40:49.153 [2024-04-17 08:37:22.284263] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a5c270 0 00:40:49.153 [2024-04-17 08:37:22.291413] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:40:49.153 [2024-04-17 08:37:22.291447] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:40:49.153 [2024-04-17 08:37:22.291452] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:40:49.153 [2024-04-17 08:37:22.291455] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:40:49.153 [2024-04-17 08:37:22.291503] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.153 [2024-04-17 08:37:22.291509] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.153 [2024-04-17 08:37:22.291512] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5c270) 00:40:49.153 [2024-04-17 08:37:22.291525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:40:49.153 [2024-04-17 08:37:22.291559] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9b6d0, cid 0, qid 0 00:40:49.153 [2024-04-17 08:37:22.299427] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.153 [2024-04-17 08:37:22.299459] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.153 [2024-04-17 08:37:22.299463] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.153 [2024-04-17 08:37:22.299467] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9b6d0) on tqpair=0x1a5c270 00:40:49.153 [2024-04-17 08:37:22.299479] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:40:49.153 [2024-04-17 08:37:22.299487] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:40:49.153 [2024-04-17 08:37:22.299493] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:40:49.153 [2024-04-17 08:37:22.299514] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.153 [2024-04-17 08:37:22.299518] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.153 [2024-04-17 08:37:22.299521] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5c270) 00:40:49.153 [2024-04-17 08:37:22.299533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.153 [2024-04-17 08:37:22.299565] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9b6d0, cid 0, qid 0 00:40:49.153 [2024-04-17 08:37:22.299628] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.153 [2024-04-17 08:37:22.299634] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.153 [2024-04-17 08:37:22.299637] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.153 [2024-04-17 08:37:22.299640] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9b6d0) on tqpair=0x1a5c270 00:40:49.153 [2024-04-17 08:37:22.299647] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:40:49.153 [2024-04-17 08:37:22.299653] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:40:49.153 [2024-04-17 08:37:22.299659] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.153 [2024-04-17 08:37:22.299662] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.153 [2024-04-17 08:37:22.299665] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5c270) 00:40:49.153 [2024-04-17 08:37:22.299670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.153 [2024-04-17 08:37:22.299684] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9b6d0, cid 0, qid 0 00:40:49.153 [2024-04-17 08:37:22.299738] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.153 [2024-04-17 08:37:22.299746] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.153 [2024-04-17 08:37:22.299750] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.153 [2024-04-17 08:37:22.299755] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9b6d0) on tqpair=0x1a5c270 00:40:49.153 [2024-04-17 08:37:22.299763] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:40:49.153 [2024-04-17 08:37:22.299772] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:40:49.153 [2024-04-17 08:37:22.299777] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.153 [2024-04-17 08:37:22.299780] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.153 [2024-04-17 08:37:22.299783] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5c270) 00:40:49.153 [2024-04-17 08:37:22.299789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.153 [2024-04-17 08:37:22.299805] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9b6d0, cid 0, qid 0 00:40:49.153 [2024-04-17 08:37:22.299853] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.153 [2024-04-17 08:37:22.299864] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.153 [2024-04-17 
08:37:22.299867] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.153 [2024-04-17 08:37:22.299870] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9b6d0) on tqpair=0x1a5c270 00:40:49.153 [2024-04-17 08:37:22.299875] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:40:49.153 [2024-04-17 08:37:22.299883] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.153 [2024-04-17 08:37:22.299886] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.153 [2024-04-17 08:37:22.299889] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5c270) 00:40:49.153 [2024-04-17 08:37:22.299894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.153 [2024-04-17 08:37:22.299908] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9b6d0, cid 0, qid 0 00:40:49.153 [2024-04-17 08:37:22.299996] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.153 [2024-04-17 08:37:22.300005] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.153 [2024-04-17 08:37:22.300009] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.153 [2024-04-17 08:37:22.300014] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9b6d0) on tqpair=0x1a5c270 00:40:49.153 [2024-04-17 08:37:22.300020] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:40:49.153 [2024-04-17 08:37:22.300024] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:40:49.153 [2024-04-17 08:37:22.300030] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:40:49.154 [2024-04-17 08:37:22.300135] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:40:49.154 [2024-04-17 08:37:22.300142] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:40:49.154 [2024-04-17 08:37:22.300150] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300153] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300155] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5c270) 00:40:49.154 [2024-04-17 08:37:22.300162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.154 [2024-04-17 08:37:22.300181] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9b6d0, cid 0, qid 0 00:40:49.154 [2024-04-17 08:37:22.300235] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.154 [2024-04-17 08:37:22.300244] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.154 [2024-04-17 08:37:22.300248] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300253] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9b6d0) on tqpair=0x1a5c270 00:40:49.154 
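Each "setting state to ..." record above is one step of the nvme_ctrlr.c initialization state machine: read VS, read CAP, check CC.EN, write CC.EN = 1, then wait for CSTS.RDY = 1 before moving on to identify. Because this controller sits behind a fabrics transport, those register reads and writes travel as Fabric Property Get/Set capsules on the admin queue, which is why every step is paired with a FABRIC PROPERTY GET or FABRIC PROPERTY SET notice. A quick way to read the handshake without the TCP-level noise, assuming this output was saved to a file (the file name is hypothetical):

    # Sketch: list the controller-init state transitions in order from a saved log
    grep -o 'setting state to [^(]*' nvmf_identify.log | uniq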
[2024-04-17 08:37:22.300260] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:40:49.154 [2024-04-17 08:37:22.300271] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300277] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300281] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5c270) 00:40:49.154 [2024-04-17 08:37:22.300290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.154 [2024-04-17 08:37:22.300308] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9b6d0, cid 0, qid 0 00:40:49.154 [2024-04-17 08:37:22.300358] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.154 [2024-04-17 08:37:22.300367] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.154 [2024-04-17 08:37:22.300372] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300377] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9b6d0) on tqpair=0x1a5c270 00:40:49.154 [2024-04-17 08:37:22.300383] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:40:49.154 [2024-04-17 08:37:22.300390] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:40:49.154 [2024-04-17 08:37:22.300415] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:40:49.154 [2024-04-17 08:37:22.300428] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:40:49.154 [2024-04-17 08:37:22.300440] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300445] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300450] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5c270) 00:40:49.154 [2024-04-17 08:37:22.300459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.154 [2024-04-17 08:37:22.300483] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9b6d0, cid 0, qid 0 00:40:49.154 [2024-04-17 08:37:22.300601] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:49.154 [2024-04-17 08:37:22.300619] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:49.154 [2024-04-17 08:37:22.300624] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300629] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5c270): datao=0, datal=4096, cccid=0 00:40:49.154 [2024-04-17 08:37:22.300635] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a9b6d0) on tqpair(0x1a5c270): expected_datao=0, payload_size=4096 00:40:49.154 [2024-04-17 08:37:22.300647] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300652] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300663] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.154 [2024-04-17 08:37:22.300670] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.154 [2024-04-17 08:37:22.300674] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300680] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9b6d0) on tqpair=0x1a5c270 00:40:49.154 [2024-04-17 08:37:22.300691] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:40:49.154 [2024-04-17 08:37:22.300702] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:40:49.154 [2024-04-17 08:37:22.300708] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:40:49.154 [2024-04-17 08:37:22.300714] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:40:49.154 [2024-04-17 08:37:22.300720] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:40:49.154 [2024-04-17 08:37:22.300726] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:40:49.154 [2024-04-17 08:37:22.300737] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:40:49.154 [2024-04-17 08:37:22.300745] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300750] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300753] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5c270) 00:40:49.154 [2024-04-17 08:37:22.300760] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:49.154 [2024-04-17 08:37:22.300777] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9b6d0, cid 0, qid 0 00:40:49.154 [2024-04-17 08:37:22.300832] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.154 [2024-04-17 08:37:22.300837] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.154 [2024-04-17 08:37:22.300840] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300843] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9b6d0) on tqpair=0x1a5c270 00:40:49.154 [2024-04-17 08:37:22.300850] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300852] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300855] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5c270) 00:40:49.154 [2024-04-17 08:37:22.300861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:49.154 [2024-04-17 08:37:22.300867] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300869] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300872] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x1a5c270) 00:40:49.154 [2024-04-17 08:37:22.300877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:49.154 [2024-04-17 08:37:22.300882] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300885] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300888] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a5c270) 00:40:49.154 [2024-04-17 08:37:22.300892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:49.154 [2024-04-17 08:37:22.300897] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300900] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300902] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5c270) 00:40:49.154 [2024-04-17 08:37:22.300907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:49.154 [2024-04-17 08:37:22.300911] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:40:49.154 [2024-04-17 08:37:22.300920] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:40:49.154 [2024-04-17 08:37:22.300927] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300930] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.300932] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5c270) 00:40:49.154 [2024-04-17 08:37:22.300938] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.154 [2024-04-17 08:37:22.300953] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9b6d0, cid 0, qid 0 00:40:49.154 [2024-04-17 08:37:22.300958] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9b830, cid 1, qid 0 00:40:49.154 [2024-04-17 08:37:22.300962] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9b990, cid 2, qid 0 00:40:49.154 [2024-04-17 08:37:22.300966] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9baf0, cid 3, qid 0 00:40:49.154 [2024-04-17 08:37:22.300969] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9bc50, cid 4, qid 0 00:40:49.154 [2024-04-17 08:37:22.301069] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.154 [2024-04-17 08:37:22.301074] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.154 [2024-04-17 08:37:22.301078] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.301080] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9bc50) on tqpair=0x1a5c270 00:40:49.154 [2024-04-17 08:37:22.301085] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:40:49.154 [2024-04-17 08:37:22.301089] 
nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:40:49.154 [2024-04-17 08:37:22.301096] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:40:49.154 [2024-04-17 08:37:22.301101] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:40:49.154 [2024-04-17 08:37:22.301105] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.301108] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.154 [2024-04-17 08:37:22.301111] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5c270) 00:40:49.154 [2024-04-17 08:37:22.301117] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:49.154 [2024-04-17 08:37:22.301129] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9bc50, cid 4, qid 0 00:40:49.154 [2024-04-17 08:37:22.301185] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.155 [2024-04-17 08:37:22.301190] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.155 [2024-04-17 08:37:22.301193] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301196] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9bc50) on tqpair=0x1a5c270 00:40:49.155 [2024-04-17 08:37:22.301247] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:40:49.155 [2024-04-17 08:37:22.301255] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:40:49.155 [2024-04-17 08:37:22.301261] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301264] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301267] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5c270) 00:40:49.155 [2024-04-17 08:37:22.301272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.155 [2024-04-17 08:37:22.301285] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9bc50, cid 4, qid 0 00:40:49.155 [2024-04-17 08:37:22.301341] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:49.155 [2024-04-17 08:37:22.301347] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:49.155 [2024-04-17 08:37:22.301350] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301352] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5c270): datao=0, datal=4096, cccid=4 00:40:49.155 [2024-04-17 08:37:22.301356] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a9bc50) on tqpair(0x1a5c270): expected_datao=0, payload_size=4096 00:40:49.155 [2024-04-17 08:37:22.301365] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301370] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:40:49.155 [2024-04-17 08:37:22.301380] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.155 [2024-04-17 08:37:22.301386] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.155 [2024-04-17 08:37:22.301388] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301391] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9bc50) on tqpair=0x1a5c270 00:40:49.155 [2024-04-17 08:37:22.301419] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:40:49.155 [2024-04-17 08:37:22.301429] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:40:49.155 [2024-04-17 08:37:22.301438] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:40:49.155 [2024-04-17 08:37:22.301443] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301446] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301449] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5c270) 00:40:49.155 [2024-04-17 08:37:22.301455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.155 [2024-04-17 08:37:22.301471] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9bc50, cid 4, qid 0 00:40:49.155 [2024-04-17 08:37:22.301543] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:49.155 [2024-04-17 08:37:22.301554] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:49.155 [2024-04-17 08:37:22.301559] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301563] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5c270): datao=0, datal=4096, cccid=4 00:40:49.155 [2024-04-17 08:37:22.301570] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a9bc50) on tqpair(0x1a5c270): expected_datao=0, payload_size=4096 00:40:49.155 [2024-04-17 08:37:22.301578] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301582] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301589] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.155 [2024-04-17 08:37:22.301594] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.155 [2024-04-17 08:37:22.301597] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301600] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9bc50) on tqpair=0x1a5c270 00:40:49.155 [2024-04-17 08:37:22.301615] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:40:49.155 [2024-04-17 08:37:22.301622] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:40:49.155 [2024-04-17 08:37:22.301629] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301632] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301635] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5c270) 00:40:49.155 [2024-04-17 08:37:22.301641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.155 [2024-04-17 08:37:22.301659] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9bc50, cid 4, qid 0 00:40:49.155 [2024-04-17 08:37:22.301722] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:49.155 [2024-04-17 08:37:22.301733] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:49.155 [2024-04-17 08:37:22.301736] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301739] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5c270): datao=0, datal=4096, cccid=4 00:40:49.155 [2024-04-17 08:37:22.301742] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a9bc50) on tqpair(0x1a5c270): expected_datao=0, payload_size=4096 00:40:49.155 [2024-04-17 08:37:22.301748] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301751] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301757] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.155 [2024-04-17 08:37:22.301762] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.155 [2024-04-17 08:37:22.301765] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301768] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9bc50) on tqpair=0x1a5c270 00:40:49.155 [2024-04-17 08:37:22.301775] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:40:49.155 [2024-04-17 08:37:22.301781] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:40:49.155 [2024-04-17 08:37:22.301792] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:40:49.155 [2024-04-17 08:37:22.301797] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:40:49.155 [2024-04-17 08:37:22.301801] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:40:49.155 [2024-04-17 08:37:22.301805] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:40:49.155 [2024-04-17 08:37:22.301808] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:40:49.155 [2024-04-17 08:37:22.301812] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:40:49.155 [2024-04-17 08:37:22.301826] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301830] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301833] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5c270) 00:40:49.155 [2024-04-17 08:37:22.301838] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.155 [2024-04-17 08:37:22.301844] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301846] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301849] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a5c270) 00:40:49.155 [2024-04-17 08:37:22.301854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:40:49.155 [2024-04-17 08:37:22.301883] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9bc50, cid 4, qid 0 00:40:49.155 [2024-04-17 08:37:22.301888] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9bdb0, cid 5, qid 0 00:40:49.155 [2024-04-17 08:37:22.301958] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.155 [2024-04-17 08:37:22.301963] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.155 [2024-04-17 08:37:22.301966] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301969] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9bc50) on tqpair=0x1a5c270 00:40:49.155 [2024-04-17 08:37:22.301975] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.155 [2024-04-17 08:37:22.301980] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.155 [2024-04-17 08:37:22.301982] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301986] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9bdb0) on tqpair=0x1a5c270 00:40:49.155 [2024-04-17 08:37:22.301993] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301996] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.301999] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a5c270) 00:40:49.155 [2024-04-17 08:37:22.302004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.155 [2024-04-17 08:37:22.302017] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9bdb0, cid 5, qid 0 00:40:49.155 [2024-04-17 08:37:22.302071] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.155 [2024-04-17 08:37:22.302076] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.155 [2024-04-17 08:37:22.302079] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.302081] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9bdb0) on tqpair=0x1a5c270 00:40:49.155 [2024-04-17 08:37:22.302089] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.302092] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.155 [2024-04-17 08:37:22.302095] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a5c270) 00:40:49.155 [2024-04-17 08:37:22.302100] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.155 [2024-04-17 08:37:22.302112] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9bdb0, cid 5, qid 0 00:40:49.155 [2024-04-17 08:37:22.302166] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.155 [2024-04-17 08:37:22.302171] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.156 [2024-04-17 08:37:22.302174] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302177] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9bdb0) on tqpair=0x1a5c270 00:40:49.156 [2024-04-17 08:37:22.302184] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302187] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302190] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a5c270) 00:40:49.156 [2024-04-17 08:37:22.302196] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.156 [2024-04-17 08:37:22.302208] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9bdb0, cid 5, qid 0 00:40:49.156 [2024-04-17 08:37:22.302261] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.156 [2024-04-17 08:37:22.302266] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.156 [2024-04-17 08:37:22.302269] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302272] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9bdb0) on tqpair=0x1a5c270 00:40:49.156 [2024-04-17 08:37:22.302282] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302285] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302288] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a5c270) 00:40:49.156 [2024-04-17 08:37:22.302294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.156 [2024-04-17 08:37:22.302299] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302302] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302305] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5c270) 00:40:49.156 [2024-04-17 08:37:22.302310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.156 [2024-04-17 08:37:22.302316] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302318] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302321] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1a5c270) 00:40:49.156 [2024-04-17 08:37:22.302326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:40:49.156 [2024-04-17 08:37:22.302332] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302335] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302338] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a5c270) 00:40:49.156 [2024-04-17 08:37:22.302343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.156 [2024-04-17 08:37:22.302358] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9bdb0, cid 5, qid 0 00:40:49.156 [2024-04-17 08:37:22.302364] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9bc50, cid 4, qid 0 00:40:49.156 [2024-04-17 08:37:22.302369] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9bf10, cid 6, qid 0 00:40:49.156 [2024-04-17 08:37:22.302375] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9c070, cid 7, qid 0 00:40:49.156 [2024-04-17 08:37:22.302522] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:49.156 [2024-04-17 08:37:22.302545] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:49.156 [2024-04-17 08:37:22.302548] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302551] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5c270): datao=0, datal=8192, cccid=5 00:40:49.156 [2024-04-17 08:37:22.302555] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a9bdb0) on tqpair(0x1a5c270): expected_datao=0, payload_size=8192 00:40:49.156 [2024-04-17 08:37:22.302571] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302574] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302579] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:49.156 [2024-04-17 08:37:22.302584] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:49.156 [2024-04-17 08:37:22.302586] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302589] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5c270): datao=0, datal=512, cccid=4 00:40:49.156 [2024-04-17 08:37:22.302592] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a9bc50) on tqpair(0x1a5c270): expected_datao=0, payload_size=512 00:40:49.156 [2024-04-17 08:37:22.302598] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302601] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302606] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:49.156 [2024-04-17 08:37:22.302611] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:49.156 [2024-04-17 08:37:22.302613] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302615] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5c270): datao=0, datal=512, cccid=6 00:40:49.156 [2024-04-17 08:37:22.302619] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a9bf10) on tqpair(0x1a5c270): expected_datao=0, payload_size=512 00:40:49.156 [2024-04-17 08:37:22.302625] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302627] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302633] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:40:49.156 [2024-04-17 08:37:22.302640] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:40:49.156 [2024-04-17 08:37:22.302644] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302648] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5c270): datao=0, datal=4096, cccid=7 00:40:49.156 [2024-04-17 08:37:22.302653] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a9c070) on tqpair(0x1a5c270): expected_datao=0, payload_size=4096 00:40:49.156 [2024-04-17 08:37:22.302662] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302667] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302673] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.156 [2024-04-17 08:37:22.302680] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.156 [2024-04-17 08:37:22.302684] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302689] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9bdb0) on tqpair=0x1a5c270 00:40:49.156 [2024-04-17 08:37:22.302708] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.156 [2024-04-17 08:37:22.302713] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.156 [2024-04-17 08:37:22.302716] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302719] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9bc50) on tqpair=0x1a5c270 00:40:49.156 [2024-04-17 08:37:22.302728] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.156 [2024-04-17 08:37:22.302735] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.156 [2024-04-17 08:37:22.302739] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302743] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9bf10) on tqpair=0x1a5c270 00:40:49.156 [2024-04-17 08:37:22.302753] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.156 [2024-04-17 08:37:22.302760] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.156 [2024-04-17 08:37:22.302765] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.156 [2024-04-17 08:37:22.302770] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9c070) on tqpair=0x1a5c270 00:40:49.156 ===================================================== 00:40:49.156 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:49.156 ===================================================== 00:40:49.156 Controller Capabilities/Features 00:40:49.156 ================================ 00:40:49.156 Vendor ID: 8086 00:40:49.156 Subsystem Vendor ID: 8086 00:40:49.156 Serial Number: SPDK00000000000001 00:40:49.156 Model Number: SPDK bdev Controller 00:40:49.156 Firmware Version: 24.01.1 00:40:49.156 Recommended Arb Burst: 6 00:40:49.156 IEEE OUI Identifier: e4 d2 5c 00:40:49.156 Multi-path I/O 00:40:49.156 May have multiple subsystem 
ports: Yes 00:40:49.156 May have multiple controllers: Yes 00:40:49.156 Associated with SR-IOV VF: No 00:40:49.156 Max Data Transfer Size: 131072 00:40:49.156 Max Number of Namespaces: 32 00:40:49.156 Max Number of I/O Queues: 127 00:40:49.156 NVMe Specification Version (VS): 1.3 00:40:49.156 NVMe Specification Version (Identify): 1.3 00:40:49.156 Maximum Queue Entries: 128 00:40:49.156 Contiguous Queues Required: Yes 00:40:49.156 Arbitration Mechanisms Supported 00:40:49.156 Weighted Round Robin: Not Supported 00:40:49.156 Vendor Specific: Not Supported 00:40:49.156 Reset Timeout: 15000 ms 00:40:49.156 Doorbell Stride: 4 bytes 00:40:49.156 NVM Subsystem Reset: Not Supported 00:40:49.156 Command Sets Supported 00:40:49.156 NVM Command Set: Supported 00:40:49.156 Boot Partition: Not Supported 00:40:49.156 Memory Page Size Minimum: 4096 bytes 00:40:49.156 Memory Page Size Maximum: 4096 bytes 00:40:49.156 Persistent Memory Region: Not Supported 00:40:49.156 Optional Asynchronous Events Supported 00:40:49.156 Namespace Attribute Notices: Supported 00:40:49.156 Firmware Activation Notices: Not Supported 00:40:49.156 ANA Change Notices: Not Supported 00:40:49.156 PLE Aggregate Log Change Notices: Not Supported 00:40:49.156 LBA Status Info Alert Notices: Not Supported 00:40:49.156 EGE Aggregate Log Change Notices: Not Supported 00:40:49.156 Normal NVM Subsystem Shutdown event: Not Supported 00:40:49.156 Zone Descriptor Change Notices: Not Supported 00:40:49.156 Discovery Log Change Notices: Not Supported 00:40:49.156 Controller Attributes 00:40:49.156 128-bit Host Identifier: Supported 00:40:49.156 Non-Operational Permissive Mode: Not Supported 00:40:49.156 NVM Sets: Not Supported 00:40:49.156 Read Recovery Levels: Not Supported 00:40:49.156 Endurance Groups: Not Supported 00:40:49.156 Predictable Latency Mode: Not Supported 00:40:49.156 Traffic Based Keep ALive: Not Supported 00:40:49.156 Namespace Granularity: Not Supported 00:40:49.157 SQ Associations: Not Supported 00:40:49.157 UUID List: Not Supported 00:40:49.157 Multi-Domain Subsystem: Not Supported 00:40:49.157 Fixed Capacity Management: Not Supported 00:40:49.157 Variable Capacity Management: Not Supported 00:40:49.157 Delete Endurance Group: Not Supported 00:40:49.157 Delete NVM Set: Not Supported 00:40:49.157 Extended LBA Formats Supported: Not Supported 00:40:49.157 Flexible Data Placement Supported: Not Supported 00:40:49.157 00:40:49.157 Controller Memory Buffer Support 00:40:49.157 ================================ 00:40:49.157 Supported: No 00:40:49.157 00:40:49.157 Persistent Memory Region Support 00:40:49.157 ================================ 00:40:49.157 Supported: No 00:40:49.157 00:40:49.157 Admin Command Set Attributes 00:40:49.157 ============================ 00:40:49.157 Security Send/Receive: Not Supported 00:40:49.157 Format NVM: Not Supported 00:40:49.157 Firmware Activate/Download: Not Supported 00:40:49.157 Namespace Management: Not Supported 00:40:49.157 Device Self-Test: Not Supported 00:40:49.157 Directives: Not Supported 00:40:49.157 NVMe-MI: Not Supported 00:40:49.157 Virtualization Management: Not Supported 00:40:49.157 Doorbell Buffer Config: Not Supported 00:40:49.157 Get LBA Status Capability: Not Supported 00:40:49.157 Command & Feature Lockdown Capability: Not Supported 00:40:49.157 Abort Command Limit: 4 00:40:49.157 Async Event Request Limit: 4 00:40:49.157 Number of Firmware Slots: N/A 00:40:49.157 Firmware Slot 1 Read-Only: N/A 00:40:49.157 Firmware Activation Without Reset: N/A 00:40:49.157 Multiple 
Update Detection Support: N/A 00:40:49.157 Firmware Update Granularity: No Information Provided 00:40:49.157 Per-Namespace SMART Log: No 00:40:49.157 Asymmetric Namespace Access Log Page: Not Supported 00:40:49.157 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:40:49.157 Command Effects Log Page: Supported 00:40:49.157 Get Log Page Extended Data: Supported 00:40:49.157 Telemetry Log Pages: Not Supported 00:40:49.157 Persistent Event Log Pages: Not Supported 00:40:49.157 Supported Log Pages Log Page: May Support 00:40:49.157 Commands Supported & Effects Log Page: Not Supported 00:40:49.157 Feature Identifiers & Effects Log Page:May Support 00:40:49.157 NVMe-MI Commands & Effects Log Page: May Support 00:40:49.157 Data Area 4 for Telemetry Log: Not Supported 00:40:49.157 Error Log Page Entries Supported: 128 00:40:49.157 Keep Alive: Supported 00:40:49.157 Keep Alive Granularity: 10000 ms 00:40:49.157 00:40:49.157 NVM Command Set Attributes 00:40:49.157 ========================== 00:40:49.157 Submission Queue Entry Size 00:40:49.157 Max: 64 00:40:49.157 Min: 64 00:40:49.157 Completion Queue Entry Size 00:40:49.157 Max: 16 00:40:49.157 Min: 16 00:40:49.157 Number of Namespaces: 32 00:40:49.157 Compare Command: Supported 00:40:49.157 Write Uncorrectable Command: Not Supported 00:40:49.157 Dataset Management Command: Supported 00:40:49.157 Write Zeroes Command: Supported 00:40:49.157 Set Features Save Field: Not Supported 00:40:49.157 Reservations: Supported 00:40:49.157 Timestamp: Not Supported 00:40:49.157 Copy: Supported 00:40:49.157 Volatile Write Cache: Present 00:40:49.157 Atomic Write Unit (Normal): 1 00:40:49.157 Atomic Write Unit (PFail): 1 00:40:49.157 Atomic Compare & Write Unit: 1 00:40:49.157 Fused Compare & Write: Supported 00:40:49.157 Scatter-Gather List 00:40:49.157 SGL Command Set: Supported 00:40:49.157 SGL Keyed: Supported 00:40:49.157 SGL Bit Bucket Descriptor: Not Supported 00:40:49.157 SGL Metadata Pointer: Not Supported 00:40:49.157 Oversized SGL: Not Supported 00:40:49.157 SGL Metadata Address: Not Supported 00:40:49.157 SGL Offset: Supported 00:40:49.157 Transport SGL Data Block: Not Supported 00:40:49.157 Replay Protected Memory Block: Not Supported 00:40:49.157 00:40:49.157 Firmware Slot Information 00:40:49.157 ========================= 00:40:49.157 Active slot: 1 00:40:49.157 Slot 1 Firmware Revision: 24.01.1 00:40:49.157 00:40:49.157 00:40:49.157 Commands Supported and Effects 00:40:49.157 ============================== 00:40:49.157 Admin Commands 00:40:49.157 -------------- 00:40:49.157 Get Log Page (02h): Supported 00:40:49.157 Identify (06h): Supported 00:40:49.157 Abort (08h): Supported 00:40:49.157 Set Features (09h): Supported 00:40:49.157 Get Features (0Ah): Supported 00:40:49.157 Asynchronous Event Request (0Ch): Supported 00:40:49.157 Keep Alive (18h): Supported 00:40:49.157 I/O Commands 00:40:49.157 ------------ 00:40:49.157 Flush (00h): Supported LBA-Change 00:40:49.157 Write (01h): Supported LBA-Change 00:40:49.157 Read (02h): Supported 00:40:49.157 Compare (05h): Supported 00:40:49.157 Write Zeroes (08h): Supported LBA-Change 00:40:49.157 Dataset Management (09h): Supported LBA-Change 00:40:49.157 Copy (19h): Supported LBA-Change 00:40:49.157 Unknown (79h): Supported LBA-Change 00:40:49.157 Unknown (7Ah): Supported 00:40:49.157 00:40:49.157 Error Log 00:40:49.157 ========= 00:40:49.157 00:40:49.157 Arbitration 00:40:49.157 =========== 00:40:49.157 Arbitration Burst: 1 00:40:49.157 00:40:49.157 Power Management 00:40:49.157 ================ 00:40:49.157 
Number of Power States: 1 00:40:49.157 Current Power State: Power State #0 00:40:49.157 Power State #0: 00:40:49.157 Max Power: 0.00 W 00:40:49.157 Non-Operational State: Operational 00:40:49.157 Entry Latency: Not Reported 00:40:49.157 Exit Latency: Not Reported 00:40:49.157 Relative Read Throughput: 0 00:40:49.157 Relative Read Latency: 0 00:40:49.157 Relative Write Throughput: 0 00:40:49.157 Relative Write Latency: 0 00:40:49.157 Idle Power: Not Reported 00:40:49.157 Active Power: Not Reported 00:40:49.157 Non-Operational Permissive Mode: Not Supported 00:40:49.157 00:40:49.157 Health Information 00:40:49.157 ================== 00:40:49.157 Critical Warnings: 00:40:49.157 Available Spare Space: OK 00:40:49.157 Temperature: OK 00:40:49.157 Device Reliability: OK 00:40:49.157 Read Only: No 00:40:49.157 Volatile Memory Backup: OK 00:40:49.157 Current Temperature: 0 Kelvin (-273 Celsius) 00:40:49.157 Temperature Threshold: [2024-04-17 08:37:22.302883] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.157 [2024-04-17 08:37:22.302888] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.157 [2024-04-17 08:37:22.302891] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a5c270) 00:40:49.157 [2024-04-17 08:37:22.302898] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.157 [2024-04-17 08:37:22.302925] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9c070, cid 7, qid 0 00:40:49.157 [2024-04-17 08:37:22.302985] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.157 [2024-04-17 08:37:22.302993] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.157 [2024-04-17 08:37:22.302996] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.157 [2024-04-17 08:37:22.303001] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9c070) on tqpair=0x1a5c270 00:40:49.157 [2024-04-17 08:37:22.303045] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:40:49.157 [2024-04-17 08:37:22.303060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:49.157 [2024-04-17 08:37:22.303068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:49.157 [2024-04-17 08:37:22.303076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:49.157 [2024-04-17 08:37:22.303084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:49.158 [2024-04-17 08:37:22.303094] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.158 [2024-04-17 08:37:22.303099] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.158 [2024-04-17 08:37:22.303104] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5c270) 00:40:49.158 [2024-04-17 08:37:22.303114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.158 [2024-04-17 08:37:22.303136] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9baf0, cid 3, qid 0 
00:40:49.158 [2024-04-17 08:37:22.303187] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.158 [2024-04-17 08:37:22.303192] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.158 [2024-04-17 08:37:22.303195] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.158 [2024-04-17 08:37:22.303198] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9baf0) on tqpair=0x1a5c270 00:40:49.158 [2024-04-17 08:37:22.303204] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.158 [2024-04-17 08:37:22.303207] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.158 [2024-04-17 08:37:22.303210] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5c270) 00:40:49.158 [2024-04-17 08:37:22.303217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.158 [2024-04-17 08:37:22.303232] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9baf0, cid 3, qid 0 00:40:49.158 [2024-04-17 08:37:22.303298] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.158 [2024-04-17 08:37:22.303304] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.158 [2024-04-17 08:37:22.303306] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.158 [2024-04-17 08:37:22.303309] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9baf0) on tqpair=0x1a5c270 00:40:49.158 [2024-04-17 08:37:22.303314] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:40:49.158 [2024-04-17 08:37:22.303318] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:40:49.158 [2024-04-17 08:37:22.303325] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.158 [2024-04-17 08:37:22.303328] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.158 [2024-04-17 08:37:22.303331] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5c270) 00:40:49.158 [2024-04-17 08:37:22.303337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.158 [2024-04-17 08:37:22.303349] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9baf0, cid 3, qid 0 00:40:49.158 [2024-04-17 08:37:22.307417] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.158 [2024-04-17 08:37:22.307449] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.158 [2024-04-17 08:37:22.307456] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.158 [2024-04-17 08:37:22.307461] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9baf0) on tqpair=0x1a5c270 00:40:49.158 [2024-04-17 08:37:22.307482] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:40:49.158 [2024-04-17 08:37:22.307488] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:40:49.158 [2024-04-17 08:37:22.307493] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5c270) 00:40:49.158 [2024-04-17 08:37:22.307504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:49.158 [2024-04-17 
08:37:22.307531] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9baf0, cid 3, qid 0 00:40:49.158 [2024-04-17 08:37:22.307586] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:40:49.158 [2024-04-17 08:37:22.307592] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:40:49.158 [2024-04-17 08:37:22.307595] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:40:49.158 [2024-04-17 08:37:22.307598] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9baf0) on tqpair=0x1a5c270 00:40:49.158 [2024-04-17 08:37:22.307604] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:40:49.158 0 Kelvin (-273 Celsius) 00:40:49.158 Available Spare: 0% 00:40:49.158 Available Spare Threshold: 0% 00:40:49.158 Life Percentage Used: 0% 00:40:49.158 Data Units Read: 0 00:40:49.158 Data Units Written: 0 00:40:49.158 Host Read Commands: 0 00:40:49.158 Host Write Commands: 0 00:40:49.158 Controller Busy Time: 0 minutes 00:40:49.158 Power Cycles: 0 00:40:49.158 Power On Hours: 0 hours 00:40:49.158 Unsafe Shutdowns: 0 00:40:49.158 Unrecoverable Media Errors: 0 00:40:49.158 Lifetime Error Log Entries: 0 00:40:49.158 Warning Temperature Time: 0 minutes 00:40:49.158 Critical Temperature Time: 0 minutes 00:40:49.158 00:40:49.158 Number of Queues 00:40:49.158 ================ 00:40:49.158 Number of I/O Submission Queues: 127 00:40:49.158 Number of I/O Completion Queues: 127 00:40:49.158 00:40:49.158 Active Namespaces 00:40:49.158 ================= 00:40:49.158 Namespace ID:1 00:40:49.158 Error Recovery Timeout: Unlimited 00:40:49.158 Command Set Identifier: NVM (00h) 00:40:49.158 Deallocate: Supported 00:40:49.158 Deallocated/Unwritten Error: Not Supported 00:40:49.158 Deallocated Read Value: Unknown 00:40:49.158 Deallocate in Write Zeroes: Not Supported 00:40:49.158 Deallocated Guard Field: 0xFFFF 00:40:49.158 Flush: Supported 00:40:49.158 Reservation: Supported 00:40:49.158 Namespace Sharing Capabilities: Multiple Controllers 00:40:49.158 Size (in LBAs): 131072 (0GiB) 00:40:49.158 Capacity (in LBAs): 131072 (0GiB) 00:40:49.158 Utilization (in LBAs): 131072 (0GiB) 00:40:49.158 NGUID: ABCDEF0123456789ABCDEF0123456789 00:40:49.158 EUI64: ABCDEF0123456789 00:40:49.158 UUID: 23503401-aa1c-4f96-9216-ad21c4abdc3c 00:40:49.158 Thin Provisioning: Not Supported 00:40:49.158 Per-NS Atomic Units: Yes 00:40:49.158 Atomic Boundary Size (Normal): 0 00:40:49.158 Atomic Boundary Size (PFail): 0 00:40:49.158 Atomic Boundary Offset: 0 00:40:49.158 Maximum Single Source Range Length: 65535 00:40:49.158 Maximum Copy Length: 65535 00:40:49.158 Maximum Source Range Count: 1 00:40:49.158 NGUID/EUI64 Never Reused: No 00:40:49.158 Namespace Write Protected: No 00:40:49.158 Number of LBA Formats: 1 00:40:49.158 Current LBA Format: LBA Format #00 00:40:49.158 LBA Format #00: Data Size: 512 Metadata Size: 0 00:40:49.158 00:40:49.158 08:37:22 -- host/identify.sh@51 -- # sync 00:40:49.158 08:37:22 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:49.158 08:37:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:40:49.158 08:37:22 -- common/autotest_common.sh@10 -- # set +x 00:40:49.158 08:37:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:40:49.158 08:37:22 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:40:49.158 08:37:22 -- host/identify.sh@56 -- # nvmftestfini 00:40:49.158 08:37:22 -- nvmf/common.sh@476 -- # nvmfcleanup 
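With the identify data printed, identify.sh tears down what the test created earlier: the subsystem is dropped over JSON-RPC (the rpc_cmd nvmf_delete_subsystem call traced above) and nvmftestfini unloads the kernel initiator modules before killprocess stops the target, as the rmmod lines below show. A rough standalone equivalent, assuming the same repo paths and a target still listening on the default RPC socket:

    # Sketch: manual teardown matching the traced nvmf_delete_subsystem/nvmfcleanup steps
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp nvme-fabrics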
00:40:49.158 08:37:22 -- nvmf/common.sh@116 -- # sync 00:40:49.158 08:37:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:40:49.158 08:37:22 -- nvmf/common.sh@119 -- # set +e 00:40:49.158 08:37:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:40:49.158 08:37:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:40:49.158 rmmod nvme_tcp 00:40:49.158 rmmod nvme_fabrics 00:40:49.158 rmmod nvme_keyring 00:40:49.158 08:37:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:40:49.158 08:37:22 -- nvmf/common.sh@123 -- # set -e 00:40:49.158 08:37:22 -- nvmf/common.sh@124 -- # return 0 00:40:49.158 08:37:22 -- nvmf/common.sh@477 -- # '[' -n 81080 ']' 00:40:49.158 08:37:22 -- nvmf/common.sh@478 -- # killprocess 81080 00:40:49.158 08:37:22 -- common/autotest_common.sh@926 -- # '[' -z 81080 ']' 00:40:49.158 08:37:22 -- common/autotest_common.sh@930 -- # kill -0 81080 00:40:49.158 08:37:22 -- common/autotest_common.sh@931 -- # uname 00:40:49.158 08:37:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:40:49.158 08:37:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81080 00:40:49.418 08:37:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:40:49.418 08:37:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:40:49.418 08:37:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81080' killing process with pid 81080 08:37:22 -- common/autotest_common.sh@945 -- # kill 81080 [2024-04-17 08:37:22.486807] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:40:49.418 08:37:22 -- common/autotest_common.sh@950 -- # wait 81080 00:40:49.418 08:37:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:40:49.418 08:37:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:40:49.418 08:37:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:40:49.418 08:37:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:49.418 08:37:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:40:49.418 08:37:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:49.418 08:37:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:49.418 08:37:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:49.678 08:37:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:40:49.678 00:40:49.678 real 0m2.588s 00:40:49.678 user 0m7.001s 00:40:49.678 sys 0m0.680s 00:40:49.678 08:37:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:49.678 08:37:22 -- common/autotest_common.sh@10 -- # set +x 00:40:49.678 ************************************ 00:40:49.678 END TEST nvmf_identify 00:40:49.678 ************************************ 00:40:49.678 08:37:22 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:40:49.678 08:37:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:40:49.678 08:37:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:40:49.678 08:37:22 -- common/autotest_common.sh@10 -- # set +x 00:40:49.678 ************************************ 00:40:49.678 START TEST nvmf_perf 00:40:49.678 ************************************ 00:40:49.678 08:37:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:40:49.678 * Looking for test storage...
00:40:49.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:40:49.678 08:37:22 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:49.678 08:37:22 -- nvmf/common.sh@7 -- # uname -s 00:40:49.678 08:37:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:49.678 08:37:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:49.678 08:37:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:49.678 08:37:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:49.678 08:37:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:49.678 08:37:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:49.678 08:37:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:49.678 08:37:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:49.678 08:37:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:49.678 08:37:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:49.678 08:37:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:40:49.678 08:37:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:40:49.678 08:37:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:49.678 08:37:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:49.678 08:37:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:49.678 08:37:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:49.678 08:37:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:49.678 08:37:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:49.678 08:37:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:49.938 08:37:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:49.938 08:37:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:49.938 08:37:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:49.938 08:37:23 -- paths/export.sh@5 -- 
# export PATH 00:40:49.938 08:37:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:49.938 08:37:23 -- nvmf/common.sh@46 -- # : 0 00:40:49.938 08:37:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:40:49.938 08:37:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:40:49.938 08:37:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:40:49.938 08:37:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:49.938 08:37:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:49.938 08:37:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:40:49.938 08:37:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:40:49.938 08:37:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:40:49.938 08:37:23 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:40:49.938 08:37:23 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:40:49.938 08:37:23 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:49.938 08:37:23 -- host/perf.sh@17 -- # nvmftestinit 00:40:49.938 08:37:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:40:49.938 08:37:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:49.938 08:37:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:40:49.938 08:37:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:40:49.938 08:37:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:40:49.938 08:37:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:49.938 08:37:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:49.938 08:37:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:49.938 08:37:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:40:49.938 08:37:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:40:49.938 08:37:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:40:49.938 08:37:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:40:49.938 08:37:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:40:49.938 08:37:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:40:49.938 08:37:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:49.938 08:37:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:49.938 08:37:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:40:49.938 08:37:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:40:49.938 08:37:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:49.938 08:37:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:49.938 08:37:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:49.938 08:37:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:49.938 08:37:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:49.938 08:37:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:49.938 08:37:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:49.938 08:37:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:49.938 08:37:23 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:40:49.938 08:37:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:40:49.938 Cannot find device "nvmf_tgt_br" 00:40:49.938 08:37:23 -- nvmf/common.sh@154 -- # true 00:40:49.938 08:37:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:40:49.938 Cannot find device "nvmf_tgt_br2" 00:40:49.938 08:37:23 -- nvmf/common.sh@155 -- # true 00:40:49.939 08:37:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:40:49.939 08:37:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:40:49.939 Cannot find device "nvmf_tgt_br" 00:40:49.939 08:37:23 -- nvmf/common.sh@157 -- # true 00:40:49.939 08:37:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:40:49.939 Cannot find device "nvmf_tgt_br2" 00:40:49.939 08:37:23 -- nvmf/common.sh@158 -- # true 00:40:49.939 08:37:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:40:49.939 08:37:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:40:49.939 08:37:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:49.939 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:49.939 08:37:23 -- nvmf/common.sh@161 -- # true 00:40:49.939 08:37:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:49.939 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:49.939 08:37:23 -- nvmf/common.sh@162 -- # true 00:40:49.939 08:37:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:40:49.939 08:37:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:49.939 08:37:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:49.939 08:37:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:49.939 08:37:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:49.939 08:37:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:50.198 08:37:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:50.198 08:37:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:40:50.198 08:37:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:40:50.198 08:37:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:40:50.198 08:37:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:40:50.198 08:37:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:40:50.198 08:37:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:40:50.198 08:37:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:50.198 08:37:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:50.198 08:37:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:50.198 08:37:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:40:50.198 08:37:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:40:50.198 08:37:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:40:50.198 08:37:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:50.198 08:37:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:50.198 08:37:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:50.198 08:37:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:50.198 08:37:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:40:50.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:50.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:40:50.198 00:40:50.198 --- 10.0.0.2 ping statistics --- 00:40:50.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:50.198 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:40:50.198 08:37:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:40:50.198 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:50.198 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:40:50.198 00:40:50.198 --- 10.0.0.3 ping statistics --- 00:40:50.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:50.198 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:40:50.198 08:37:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:50.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:50.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:40:50.198 00:40:50.198 --- 10.0.0.1 ping statistics --- 00:40:50.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:50.198 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:40:50.198 08:37:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:50.198 08:37:23 -- nvmf/common.sh@421 -- # return 0 00:40:50.198 08:37:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:40:50.198 08:37:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:50.198 08:37:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:40:50.198 08:37:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:40:50.198 08:37:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:50.198 08:37:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:40:50.198 08:37:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:40:50.198 08:37:23 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:40:50.198 08:37:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:40:50.198 08:37:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:40:50.198 08:37:23 -- common/autotest_common.sh@10 -- # set +x 00:40:50.198 08:37:23 -- nvmf/common.sh@469 -- # nvmfpid=81308 00:40:50.198 08:37:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:40:50.198 08:37:23 -- nvmf/common.sh@470 -- # waitforlisten 81308 00:40:50.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:50.198 08:37:23 -- common/autotest_common.sh@819 -- # '[' -z 81308 ']' 00:40:50.198 08:37:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:50.198 08:37:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:40:50.198 08:37:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:50.198 08:37:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:40:50.198 08:37:23 -- common/autotest_common.sh@10 -- # set +x 00:40:50.198 [2024-04-17 08:37:23.518765] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
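The ip/iptables trace above is nvmf_veth_init building the virtual topology that the three pings just verified: veth pairs whose far ends sit inside the nvmf_tgt_ns_spdk namespace, with the host-side peers enslaved to the nvmf_br bridge. A condensed sketch of the same plumbing, reduced to a single target interface (names and addresses as in the log; the full helper also sets up the second target interface, nvmf_tgt_if2):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # initiator -> target, as in the log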
00:40:50.198 [2024-04-17 08:37:23.518850] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:50.458 [2024-04-17 08:37:23.661265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:50.458 [2024-04-17 08:37:23.766590] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:40:50.458 [2024-04-17 08:37:23.766819] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:50.458 [2024-04-17 08:37:23.766853] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:50.458 [2024-04-17 08:37:23.766895] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:50.458 [2024-04-17 08:37:23.767052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:50.458 [2024-04-17 08:37:23.767375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:40:50.458 [2024-04-17 08:37:23.767566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:50.458 [2024-04-17 08:37:23.767570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:40:51.394 08:37:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:40:51.394 08:37:24 -- common/autotest_common.sh@852 -- # return 0 00:40:51.394 08:37:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:40:51.394 08:37:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:40:51.394 08:37:24 -- common/autotest_common.sh@10 -- # set +x 00:40:51.394 08:37:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:51.394 08:37:24 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:40:51.394 08:37:24 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:40:51.653 08:37:24 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:40:51.653 08:37:24 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:40:51.912 08:37:25 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:40:51.912 08:37:25 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:52.169 08:37:25 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:40:52.169 08:37:25 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:40:52.169 08:37:25 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:40:52.169 08:37:25 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:40:52.169 08:37:25 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:40:52.427 [2024-04-17 08:37:25.513334] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:52.427 08:37:25 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:52.427 08:37:25 -- host/perf.sh@45 -- # for bdev in $bdevs 00:40:52.427 08:37:25 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:52.702 08:37:25 -- host/perf.sh@45 -- # for bdev in $bdevs 00:40:52.702 08:37:25 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:52.960 
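At this point perf.sh has stood the target up inside the namespace: a TCP transport plus one subsystem carrying two namespaces, the 64 MiB Malloc ramdisk and the local NVMe bdev at 0000:00:06.0; the listener is attached in the very next step below. Condensed to the bare rpc.py sequence traced in the log (a sketch, not the full script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512            # yields Malloc0: 64 MiB of 512-byte blocks
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420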
08:37:26 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:53.218 [2024-04-17 08:37:26.346229] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:53.218 08:37:26 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:53.477 08:37:26 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:40:53.478 08:37:26 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:40:53.478 08:37:26 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:40:53.478 08:37:26 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:40:54.412 Initializing NVMe Controllers 00:40:54.412 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:40:54.412 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:40:54.412 Initialization complete. Launching workers. 00:40:54.412 ======================================================== 00:40:54.412 Latency(us) 00:40:54.412 Device Information : IOPS MiB/s Average min max 00:40:54.412 PCIE (0000:00:06.0) NSID 1 from core 0: 27479.59 107.34 1163.59 262.27 7859.48 00:40:54.412 ======================================================== 00:40:54.412 Total : 27479.59 107.34 1163.59 262.27 7859.48 00:40:54.412 00:40:54.412 08:37:27 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:55.789 Initializing NVMe Controllers 00:40:55.789 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:55.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:40:55.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:40:55.789 Initialization complete. Launching workers. 00:40:55.789 ======================================================== 00:40:55.789 Latency(us) 00:40:55.789 Device Information : IOPS MiB/s Average min max 00:40:55.789 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4311.13 16.84 228.88 91.45 4288.54 00:40:55.789 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.89 0.48 8136.93 6997.79 12052.34 00:40:55.789 ======================================================== 00:40:55.789 Total : 4434.02 17.32 448.05 91.45 12052.34 00:40:55.789 00:40:55.789 08:37:29 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:57.168 Initializing NVMe Controllers 00:40:57.168 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:57.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:40:57.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:40:57.168 Initialization complete. Launching workers. 
00:40:57.168 ======================================================== 00:40:57.168 Latency(us) 00:40:57.168 Device Information : IOPS MiB/s Average min max 00:40:57.168 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9100.11 35.55 3517.09 527.38 9851.33 00:40:57.168 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2633.87 10.29 12210.73 4327.62 23328.10 00:40:57.168 ======================================================== 00:40:57.168 Total : 11733.98 45.84 5468.51 527.38 23328.10 00:40:57.168 00:40:57.168 08:37:30 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:40:57.168 08:37:30 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:59.711 Initializing NVMe Controllers 00:40:59.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:59.711 Controller IO queue size 128, less than required. 00:40:59.711 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:59.711 Controller IO queue size 128, less than required. 00:40:59.711 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:59.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:40:59.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:40:59.711 Initialization complete. Launching workers. 00:40:59.711 ======================================================== 00:40:59.711 Latency(us) 00:40:59.711 Device Information : IOPS MiB/s Average min max 00:40:59.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1658.14 414.54 79720.78 46185.79 167826.83 00:40:59.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 544.88 136.22 251172.93 130989.69 378319.93 00:40:59.711 ======================================================== 00:40:59.711 Total : 2203.02 550.76 122126.69 46185.79 378319.93 00:40:59.711 00:40:59.711 08:37:32 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:40:59.969 No valid NVMe controllers or AIO or URING devices found 00:40:59.969 Initializing NVMe Controllers 00:40:59.969 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:59.969 Controller IO queue size 128, less than required. 00:40:59.969 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:59.969 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:40:59.969 Controller IO queue size 128, less than required. 00:40:59.969 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:59.969 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:40:59.969 WARNING: Some requested NVMe devices were skipped 00:40:59.969 08:37:33 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:41:02.505 Initializing NVMe Controllers 00:41:02.505 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:02.505 Controller IO queue size 128, less than required. 00:41:02.505 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:02.505 Controller IO queue size 128, less than required. 00:41:02.505 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:02.505 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:02.505 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:41:02.505 Initialization complete. Launching workers. 00:41:02.505 00:41:02.505 ==================== 00:41:02.505 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:41:02.505 TCP transport: 00:41:02.505 polls: 9854 00:41:02.505 idle_polls: 6270 00:41:02.505 sock_completions: 3584 00:41:02.505 nvme_completions: 4996 00:41:02.505 submitted_requests: 7622 00:41:02.505 queued_requests: 1 00:41:02.505 00:41:02.505 ==================== 00:41:02.505 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:41:02.505 TCP transport: 00:41:02.505 polls: 9814 00:41:02.505 idle_polls: 6506 00:41:02.505 sock_completions: 3308 00:41:02.505 nvme_completions: 6447 00:41:02.505 submitted_requests: 9883 00:41:02.505 queued_requests: 1 00:41:02.505 ======================================================== 00:41:02.505 Latency(us) 00:41:02.505 Device Information : IOPS MiB/s Average min max 00:41:02.505 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1311.81 327.95 100442.94 66098.20 166863.54 00:41:02.505 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1674.13 418.53 76937.63 31185.58 139710.03 00:41:02.505 ======================================================== 00:41:02.505 Total : 2985.94 746.48 87264.23 31185.58 166863.54 00:41:02.505 00:41:02.505 08:37:35 -- host/perf.sh@66 -- # sync 00:41:02.505 08:37:35 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:02.763 08:37:35 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:41:02.763 08:37:35 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:41:02.763 08:37:35 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:41:03.021 08:37:36 -- host/perf.sh@72 -- # ls_guid=685c3671-54ce-497a-9d80-cd3b7e501ce8 00:41:03.021 08:37:36 -- host/perf.sh@73 -- # get_lvs_free_mb 685c3671-54ce-497a-9d80-cd3b7e501ce8 00:41:03.021 08:37:36 -- common/autotest_common.sh@1343 -- # local lvs_uuid=685c3671-54ce-497a-9d80-cd3b7e501ce8 00:41:03.021 08:37:36 -- common/autotest_common.sh@1344 -- # local lvs_info 00:41:03.021 08:37:36 -- common/autotest_common.sh@1345 -- # local fc 00:41:03.021 08:37:36 -- common/autotest_common.sh@1346 -- # local cs 00:41:03.021 08:37:36 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:41:03.279 08:37:36 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:41:03.279 { 
00:41:03.279 "base_bdev": "Nvme0n1", 00:41:03.279 "block_size": 4096, 00:41:03.279 "cluster_size": 4194304, 00:41:03.279 "free_clusters": 1278, 00:41:03.279 "name": "lvs_0", 00:41:03.279 "total_data_clusters": 1278, 00:41:03.279 "uuid": "685c3671-54ce-497a-9d80-cd3b7e501ce8" 00:41:03.279 } 00:41:03.279 ]' 00:41:03.279 08:37:36 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="685c3671-54ce-497a-9d80-cd3b7e501ce8") .free_clusters' 00:41:03.279 08:37:36 -- common/autotest_common.sh@1348 -- # fc=1278 00:41:03.279 08:37:36 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="685c3671-54ce-497a-9d80-cd3b7e501ce8") .cluster_size' 00:41:03.279 08:37:36 -- common/autotest_common.sh@1349 -- # cs=4194304 00:41:03.279 08:37:36 -- common/autotest_common.sh@1352 -- # free_mb=5112 00:41:03.279 5112 00:41:03.279 08:37:36 -- common/autotest_common.sh@1353 -- # echo 5112 00:41:03.279 08:37:36 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:41:03.279 08:37:36 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 685c3671-54ce-497a-9d80-cd3b7e501ce8 lbd_0 5112 00:41:03.537 08:37:36 -- host/perf.sh@80 -- # lb_guid=604e8d6c-19b4-4741-a3da-a9f14a9e81a6 00:41:03.537 08:37:36 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 604e8d6c-19b4-4741-a3da-a9f14a9e81a6 lvs_n_0 00:41:03.794 08:37:36 -- host/perf.sh@83 -- # ls_nested_guid=2047ab2c-3631-4c3a-baff-692679e1b435 00:41:03.794 08:37:36 -- host/perf.sh@84 -- # get_lvs_free_mb 2047ab2c-3631-4c3a-baff-692679e1b435 00:41:03.794 08:37:36 -- common/autotest_common.sh@1343 -- # local lvs_uuid=2047ab2c-3631-4c3a-baff-692679e1b435 00:41:03.794 08:37:36 -- common/autotest_common.sh@1344 -- # local lvs_info 00:41:03.794 08:37:36 -- common/autotest_common.sh@1345 -- # local fc 00:41:03.794 08:37:36 -- common/autotest_common.sh@1346 -- # local cs 00:41:03.794 08:37:37 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:41:04.055 08:37:37 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:41:04.055 { 00:41:04.055 "base_bdev": "Nvme0n1", 00:41:04.055 "block_size": 4096, 00:41:04.055 "cluster_size": 4194304, 00:41:04.055 "free_clusters": 0, 00:41:04.055 "name": "lvs_0", 00:41:04.055 "total_data_clusters": 1278, 00:41:04.055 "uuid": "685c3671-54ce-497a-9d80-cd3b7e501ce8" 00:41:04.055 }, 00:41:04.055 { 00:41:04.055 "base_bdev": "604e8d6c-19b4-4741-a3da-a9f14a9e81a6", 00:41:04.055 "block_size": 4096, 00:41:04.055 "cluster_size": 4194304, 00:41:04.055 "free_clusters": 1276, 00:41:04.055 "name": "lvs_n_0", 00:41:04.055 "total_data_clusters": 1276, 00:41:04.055 "uuid": "2047ab2c-3631-4c3a-baff-692679e1b435" 00:41:04.055 } 00:41:04.055 ]' 00:41:04.055 08:37:37 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="2047ab2c-3631-4c3a-baff-692679e1b435") .free_clusters' 00:41:04.055 08:37:37 -- common/autotest_common.sh@1348 -- # fc=1276 00:41:04.055 08:37:37 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="2047ab2c-3631-4c3a-baff-692679e1b435") .cluster_size' 00:41:04.055 08:37:37 -- common/autotest_common.sh@1349 -- # cs=4194304 00:41:04.055 08:37:37 -- common/autotest_common.sh@1352 -- # free_mb=5104 00:41:04.055 5104 00:41:04.055 08:37:37 -- common/autotest_common.sh@1353 -- # echo 5104 00:41:04.055 08:37:37 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:41:04.313 08:37:37 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2047ab2c-3631-4c3a-baff-692679e1b435 
lbd_nest_0 5104 00:41:04.313 08:37:37 -- host/perf.sh@88 -- # lb_nested_guid=18489995-1489-4b55-af53-6aa1b09e85f2 00:41:04.313 08:37:37 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:04.571 08:37:37 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:41:04.571 08:37:37 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 18489995-1489-4b55-af53-6aa1b09e85f2 00:41:04.828 08:37:38 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:05.085 08:37:38 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:41:05.085 08:37:38 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:41:05.085 08:37:38 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:41:05.085 08:37:38 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:41:05.085 08:37:38 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:05.343 No valid NVMe controllers or AIO or URING devices found 00:41:05.343 Initializing NVMe Controllers 00:41:05.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:05.343 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:41:05.343 WARNING: Some requested NVMe devices were skipped 00:41:05.343 08:37:38 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:41:05.343 08:37:38 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:17.548 Initializing NVMe Controllers 00:41:17.548 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:17.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:17.548 Initialization complete. Launching workers. 
00:41:17.548 ======================================================== 00:41:17.548 Latency(us) 00:41:17.548 Device Information : IOPS MiB/s Average min max 00:41:17.548 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1191.61 148.95 838.53 268.24 7519.22 00:41:17.548 ======================================================== 00:41:17.548 Total : 1191.61 148.95 838.53 268.24 7519.22 00:41:17.548 00:41:17.548 08:37:48 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:41:17.548 08:37:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:41:17.548 08:37:48 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:17.549 No valid NVMe controllers or AIO or URING devices found 00:41:17.549 Initializing NVMe Controllers 00:41:17.549 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:17.549 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:41:17.549 WARNING: Some requested NVMe devices were skipped 00:41:17.549 08:37:49 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:41:17.549 08:37:49 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:27.544 Initializing NVMe Controllers 00:41:27.544 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:27.544 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:27.544 Initialization complete. Launching workers. 00:41:27.544 ======================================================== 00:41:27.544 Latency(us) 00:41:27.544 Device Information : IOPS MiB/s Average min max 00:41:27.544 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1096.52 137.06 29241.10 7794.13 281640.48 00:41:27.544 ======================================================== 00:41:27.544 Total : 1096.52 137.06 29241.10 7794.13 281640.48 00:41:27.544 00:41:27.544 08:37:59 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:41:27.544 08:37:59 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:41:27.544 08:37:59 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:27.544 No valid NVMe controllers or AIO or URING devices found 00:41:27.544 Initializing NVMe Controllers 00:41:27.544 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:27.544 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:41:27.544 WARNING: Some requested NVMe devices were skipped 00:41:27.544 08:37:59 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:41:27.544 08:37:59 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:37.516 Initializing NVMe Controllers 00:41:37.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:37.516 Controller IO queue size 128, less than required. 00:41:37.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:41:37.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:37.516 Initialization complete. Launching workers. 00:41:37.516 ======================================================== 00:41:37.516 Latency(us) 00:41:37.516 Device Information : IOPS MiB/s Average min max 00:41:37.516 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3934.54 491.82 32589.15 4683.27 88469.89 00:41:37.516 ======================================================== 00:41:37.516 Total : 3934.54 491.82 32589.15 4683.27 88469.89 00:41:37.516 00:41:37.516 08:38:10 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:37.516 08:38:10 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 18489995-1489-4b55-af53-6aa1b09e85f2 00:41:37.516 08:38:10 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:41:37.775 08:38:10 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 604e8d6c-19b4-4741-a3da-a9f14a9e81a6 00:41:37.775 08:38:11 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:41:38.035 08:38:11 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:41:38.035 08:38:11 -- host/perf.sh@114 -- # nvmftestfini 00:41:38.035 08:38:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:41:38.035 08:38:11 -- nvmf/common.sh@116 -- # sync 00:41:38.035 08:38:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:41:38.035 08:38:11 -- nvmf/common.sh@119 -- # set +e 00:41:38.035 08:38:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:41:38.035 08:38:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:41:38.035 rmmod nvme_tcp 00:41:38.035 rmmod nvme_fabrics 00:41:38.035 rmmod nvme_keyring 00:41:38.035 08:38:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:41:38.035 08:38:11 -- nvmf/common.sh@123 -- # set -e 00:41:38.035 08:38:11 -- nvmf/common.sh@124 -- # return 0 00:41:38.035 08:38:11 -- nvmf/common.sh@477 -- # '[' -n 81308 ']' 00:41:38.035 08:38:11 -- nvmf/common.sh@478 -- # killprocess 81308 00:41:38.035 08:38:11 -- common/autotest_common.sh@926 -- # '[' -z 81308 ']' 00:41:38.035 08:38:11 -- common/autotest_common.sh@930 -- # kill -0 81308 00:41:38.035 08:38:11 -- common/autotest_common.sh@931 -- # uname 00:41:38.035 08:38:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:41:38.035 08:38:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81308 00:41:38.294 08:38:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:41:38.294 08:38:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:41:38.294 killing process with pid 81308 00:41:38.294 08:38:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81308' 00:41:38.294 08:38:11 -- common/autotest_common.sh@945 -- # kill 81308 00:41:38.294 08:38:11 -- common/autotest_common.sh@950 -- # wait 81308 00:41:38.553 08:38:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:41:38.553 08:38:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:41:38.553 08:38:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:41:38.553 08:38:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:38.553 08:38:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:41:38.553 08:38:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:38.553 08:38:11 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:41:38.553 08:38:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:38.553 08:38:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:41:38.812 00:41:38.812 real 0m49.026s 00:41:38.812 user 3m4.288s 00:41:38.812 sys 0m10.209s 00:41:38.812 08:38:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:38.812 08:38:11 -- common/autotest_common.sh@10 -- # set +x 00:41:38.812 ************************************ 00:41:38.812 END TEST nvmf_perf 00:41:38.812 ************************************ 00:41:38.812 08:38:11 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:41:38.812 08:38:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:41:38.812 08:38:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:41:38.812 08:38:11 -- common/autotest_common.sh@10 -- # set +x 00:41:38.812 ************************************ 00:41:38.812 START TEST nvmf_fio_host 00:41:38.812 ************************************ 00:41:38.812 08:38:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:41:38.812 * Looking for test storage... 00:41:38.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:41:38.812 08:38:12 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:38.812 08:38:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:38.812 08:38:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:38.812 08:38:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:38.812 08:38:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.812 08:38:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.812 08:38:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.812 08:38:12 -- paths/export.sh@5 -- # export PATH 00:41:38.812 08:38:12 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.812 08:38:12 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:38.812 08:38:12 -- nvmf/common.sh@7 -- # uname -s 00:41:38.812 08:38:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:38.812 08:38:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:38.812 08:38:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:38.812 08:38:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:38.812 08:38:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:38.812 08:38:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:38.812 08:38:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:38.812 08:38:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:38.812 08:38:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:38.812 08:38:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:38.812 08:38:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:41:38.812 08:38:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:41:38.812 08:38:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:38.812 08:38:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:38.812 08:38:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:38.812 08:38:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:38.812 08:38:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:38.812 08:38:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:38.813 08:38:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:38.813 08:38:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.813 08:38:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.813 08:38:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.813 08:38:12 -- paths/export.sh@5 -- # export PATH 00:41:38.813 08:38:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.813 08:38:12 -- nvmf/common.sh@46 -- # : 0 00:41:38.813 08:38:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:41:38.813 08:38:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:41:38.813 08:38:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:41:38.813 08:38:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:38.813 08:38:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:38.813 08:38:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:41:38.813 08:38:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:41:38.813 08:38:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:41:38.813 08:38:12 -- host/fio.sh@12 -- # nvmftestinit 00:41:38.813 08:38:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:41:38.813 08:38:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:38.813 08:38:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:41:38.813 08:38:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:41:38.813 08:38:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:41:38.813 08:38:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:38.813 08:38:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:38.813 08:38:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:38.813 08:38:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:41:38.813 08:38:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:41:38.813 08:38:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:41:38.813 08:38:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:41:38.813 08:38:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:41:38.813 08:38:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:41:38.813 08:38:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:38.813 08:38:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:38.813 08:38:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:41:38.813 08:38:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:41:38.813 08:38:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:38.813 08:38:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:38.813 08:38:12 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:38.813 08:38:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:38.813 08:38:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:38.813 08:38:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:38.813 08:38:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:38.813 08:38:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:38.813 08:38:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:41:39.072 08:38:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:41:39.072 Cannot find device "nvmf_tgt_br" 00:41:39.072 08:38:12 -- nvmf/common.sh@154 -- # true 00:41:39.072 08:38:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:41:39.072 Cannot find device "nvmf_tgt_br2" 00:41:39.072 08:38:12 -- nvmf/common.sh@155 -- # true 00:41:39.072 08:38:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:41:39.072 08:38:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:41:39.072 Cannot find device "nvmf_tgt_br" 00:41:39.072 08:38:12 -- nvmf/common.sh@157 -- # true 00:41:39.072 08:38:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:41:39.072 Cannot find device "nvmf_tgt_br2" 00:41:39.072 08:38:12 -- nvmf/common.sh@158 -- # true 00:41:39.072 08:38:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:41:39.072 08:38:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:41:39.072 08:38:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:39.072 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:39.072 08:38:12 -- nvmf/common.sh@161 -- # true 00:41:39.072 08:38:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:39.072 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:39.072 08:38:12 -- nvmf/common.sh@162 -- # true 00:41:39.072 08:38:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:41:39.072 08:38:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:39.072 08:38:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:39.072 08:38:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:39.072 08:38:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:39.072 08:38:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:39.072 08:38:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:39.072 08:38:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:41:39.072 08:38:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:41:39.072 08:38:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:41:39.072 08:38:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:41:39.072 08:38:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:41:39.072 08:38:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:41:39.072 08:38:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:39.072 08:38:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:39.072 08:38:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:41:39.072 08:38:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:41:39.331 08:38:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:41:39.332 08:38:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:41:39.332 08:38:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:39.332 08:38:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:39.332 08:38:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:39.332 08:38:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:39.332 08:38:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:41:39.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:39.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:41:39.332 00:41:39.332 --- 10.0.0.2 ping statistics --- 00:41:39.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:39.332 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:41:39.332 08:38:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:41:39.332 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:39.332 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:41:39.332 00:41:39.332 --- 10.0.0.3 ping statistics --- 00:41:39.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:39.332 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:41:39.332 08:38:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:39.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:39.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:41:39.332 00:41:39.332 --- 10.0.0.1 ping statistics --- 00:41:39.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:39.332 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:41:39.332 08:38:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:39.332 08:38:12 -- nvmf/common.sh@421 -- # return 0 00:41:39.332 08:38:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:41:39.332 08:38:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:39.332 08:38:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:41:39.332 08:38:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:41:39.332 08:38:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:39.332 08:38:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:41:39.332 08:38:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:41:39.332 08:38:12 -- host/fio.sh@14 -- # [[ y != y ]] 00:41:39.332 08:38:12 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:41:39.332 08:38:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:41:39.332 08:38:12 -- common/autotest_common.sh@10 -- # set +x 00:41:39.332 08:38:12 -- host/fio.sh@22 -- # nvmfpid=82259 00:41:39.332 08:38:12 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:39.332 08:38:12 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:41:39.332 08:38:12 -- host/fio.sh@26 -- # waitforlisten 82259 00:41:39.332 08:38:12 -- common/autotest_common.sh@819 -- # '[' -z 82259 ']' 00:41:39.332 08:38:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:39.332 08:38:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:41:39.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
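[Annotation] For readers following the nvmf_veth_init trace above: condensed into a plain script, the topology it builds looks roughly like the sketch below. Every command is taken from the trace itself (names, addresses, ports); only the loop, the shebang, and set -e are editorial shorthand, and the harness's error handling is omitted.

    #!/usr/bin/env bash
    # Sketch of the veth/netns test topology from the nvmf_veth_init trace above.
    set -e

    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"

    # Three veth pairs: one initiator-side, two target-side.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Target ends move into the namespace; the *_br ends stay in the root ns.
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"

    # Addressing: initiator 10.0.0.1, first target 10.0.0.2, second 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring every link up, including loopback inside the namespace.
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up

    # One bridge ties the three root-namespace veth ends together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP traffic (port 4420) in and bridged traffic through.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Smoke-test the topology in both directions, as the log does.
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec "$NS" ping -c 1 10.0.0.1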
00:41:39.332 08:38:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:39.332 08:38:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:41:39.332 08:38:12 -- common/autotest_common.sh@10 -- # set +x 00:41:39.332 [2024-04-17 08:38:12.515383] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:41:39.332 [2024-04-17 08:38:12.515477] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:39.332 [2024-04-17 08:38:12.648031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:39.591 [2024-04-17 08:38:12.748653] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:41:39.591 [2024-04-17 08:38:12.748774] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:39.591 [2024-04-17 08:38:12.748781] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:39.591 [2024-04-17 08:38:12.748786] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:39.591 [2024-04-17 08:38:12.749002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:39.591 [2024-04-17 08:38:12.749115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:41:39.591 [2024-04-17 08:38:12.749202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:39.591 [2024-04-17 08:38:12.749205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:41:40.159 08:38:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:41:40.159 08:38:13 -- common/autotest_common.sh@852 -- # return 0 00:41:40.159 08:38:13 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:40.159 08:38:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:40.159 08:38:13 -- common/autotest_common.sh@10 -- # set +x 00:41:40.159 [2024-04-17 08:38:13.451753] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:40.159 08:38:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:40.159 08:38:13 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:41:40.159 08:38:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:41:40.159 08:38:13 -- common/autotest_common.sh@10 -- # set +x 00:41:40.418 08:38:13 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:41:40.418 08:38:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:40.418 08:38:13 -- common/autotest_common.sh@10 -- # set +x 00:41:40.418 Malloc1 00:41:40.418 08:38:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:40.418 08:38:13 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:40.418 08:38:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:40.418 08:38:13 -- common/autotest_common.sh@10 -- # set +x 00:41:40.418 08:38:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:40.418 08:38:13 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:41:40.418 08:38:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:40.418 08:38:13 -- common/autotest_common.sh@10 -- # set +x 00:41:40.418 08:38:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:40.418 08:38:13 -- 
host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:40.418 08:38:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:40.418 08:38:13 -- common/autotest_common.sh@10 -- # set +x 00:41:40.418 [2024-04-17 08:38:13.573378] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:40.418 08:38:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:40.418 08:38:13 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:40.418 08:38:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:40.418 08:38:13 -- common/autotest_common.sh@10 -- # set +x 00:41:40.418 08:38:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:40.418 08:38:13 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:41:40.418 08:38:13 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:41:40.418 08:38:13 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:41:40.418 08:38:13 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:41:40.418 08:38:13 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:40.418 08:38:13 -- common/autotest_common.sh@1318 -- # local sanitizers 00:41:40.418 08:38:13 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:41:40.418 08:38:13 -- common/autotest_common.sh@1320 -- # shift 00:41:40.418 08:38:13 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:41:40.418 08:38:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:41:40.418 08:38:13 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:41:40.418 08:38:13 -- common/autotest_common.sh@1324 -- # grep libasan 00:41:40.418 08:38:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:41:40.418 08:38:13 -- common/autotest_common.sh@1324 -- # asan_lib= 00:41:40.418 08:38:13 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:41:40.418 08:38:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:41:40.418 08:38:13 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:41:40.418 08:38:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:41:40.418 08:38:13 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:41:40.418 08:38:13 -- common/autotest_common.sh@1324 -- # asan_lib= 00:41:40.418 08:38:13 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:41:40.418 08:38:13 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:41:40.418 08:38:13 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:41:40.678 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:41:40.678 fio-3.35 00:41:40.678 Starting 1 thread 00:41:43.212 00:41:43.212 test: (groupid=0, jobs=1): err= 0: pid=82344: Wed Apr 17 08:38:16 2024 00:41:43.212 read: IOPS=10.0k, 
BW=39.1MiB/s (41.0MB/s)(78.5MiB/2006msec) 00:41:43.212 slat (nsec): min=1751, max=192695, avg=2047.65, stdev=1815.90 00:41:43.212 clat (usec): min=2087, max=11967, avg=6747.07, stdev=580.25 00:41:43.212 lat (usec): min=2112, max=11969, avg=6749.12, stdev=580.12 00:41:43.212 clat percentiles (usec): 00:41:43.212 | 1.00th=[ 5538], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6325], 00:41:43.212 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6849], 00:41:43.212 | 70.00th=[ 6980], 80.00th=[ 7177], 90.00th=[ 7504], 95.00th=[ 7767], 00:41:43.212 | 99.00th=[ 8291], 99.50th=[ 8586], 99.90th=[ 9503], 99.95th=[10814], 00:41:43.212 | 99.99th=[11207] 00:41:43.212 bw ( KiB/s): min=38960, max=40696, per=99.96%, avg=40036.00, stdev=766.62, samples=4 00:41:43.212 iops : min= 9740, max=10174, avg=10009.00, stdev=191.66, samples=4 00:41:43.212 write: IOPS=10.0k, BW=39.2MiB/s (41.1MB/s)(78.6MiB/2006msec); 0 zone resets 00:41:43.212 slat (nsec): min=1828, max=127122, avg=2201.70, stdev=1070.49 00:41:43.212 clat (usec): min=1325, max=11810, avg=6007.64, stdev=486.98 00:41:43.212 lat (usec): min=1333, max=11812, avg=6009.84, stdev=486.91 00:41:43.212 clat percentiles (usec): 00:41:43.212 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[ 5669], 00:41:43.212 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 5997], 60.00th=[ 6128], 00:41:43.212 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6718], 00:41:43.212 | 99.00th=[ 7373], 99.50th=[ 7570], 99.90th=[ 8979], 99.95th=[ 9372], 00:41:43.212 | 99.99th=[11076] 00:41:43.212 bw ( KiB/s): min=39296, max=40984, per=100.00%, avg=40102.00, stdev=692.49, samples=4 00:41:43.212 iops : min= 9824, max=10246, avg=10025.50, stdev=173.12, samples=4 00:41:43.212 lat (msec) : 2=0.03%, 4=0.15%, 10=99.77%, 20=0.05% 00:41:43.212 cpu : usr=73.02%, sys=20.10%, ctx=4, majf=0, minf=5 00:41:43.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:41:43.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:43.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:43.212 issued rwts: total=20087,20109,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:43.212 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:43.212 00:41:43.212 Run status group 0 (all jobs): 00:41:43.212 READ: bw=39.1MiB/s (41.0MB/s), 39.1MiB/s-39.1MiB/s (41.0MB/s-41.0MB/s), io=78.5MiB (82.3MB), run=2006-2006msec 00:41:43.212 WRITE: bw=39.2MiB/s (41.1MB/s), 39.2MiB/s-39.2MiB/s (41.1MB/s-41.1MB/s), io=78.6MiB (82.4MB), run=2006-2006msec 00:41:43.212 08:38:16 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:41:43.212 08:38:16 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:41:43.212 08:38:16 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:41:43.212 08:38:16 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:43.212 08:38:16 -- common/autotest_common.sh@1318 -- # local sanitizers 00:41:43.212 08:38:16 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:41:43.212 08:38:16 -- common/autotest_common.sh@1320 -- # shift 00:41:43.212 08:38:16 -- common/autotest_common.sh@1322 -- # local asan_lib= 
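[Annotation] The fio_plugin trace above boils down to one invocation: preload the SPDK NVMe ioengine and hand fio a transport ID in --filename instead of a block device. A minimal sketch, assuming the paths from this run and a target already listening on 10.0.0.2:4420; the sanitizer probe mirrors the grep/awk steps in the trace, which here resolve to an empty asan_lib.

    # Probe whether the fio plugin links an ASan runtime; if it does, that
    # runtime has to come first in LD_PRELOAD (this is what the ldd/grep/awk
    # steps in the trace above are doing).
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

    # Run fio with the SPDK NVMe ioengine preloaded. The --filename string is
    # a transport ID: NVMe/TCP, IPv4, target 10.0.0.2 port 4420, namespace 1.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096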
00:41:43.212 08:38:16 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:41:43.212 08:38:16 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:41:43.212 08:38:16 -- common/autotest_common.sh@1324 -- # grep libasan 00:41:43.212 08:38:16 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:41:43.212 08:38:16 -- common/autotest_common.sh@1324 -- # asan_lib= 00:41:43.212 08:38:16 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:41:43.212 08:38:16 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:41:43.212 08:38:16 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:41:43.212 08:38:16 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:41:43.212 08:38:16 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:41:43.212 08:38:16 -- common/autotest_common.sh@1324 -- # asan_lib= 00:41:43.212 08:38:16 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:41:43.212 08:38:16 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:41:43.212 08:38:16 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:41:43.212 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:41:43.212 fio-3.35 00:41:43.212 Starting 1 thread 00:41:45.749 00:41:45.749 test: (groupid=0, jobs=1): err= 0: pid=82387: Wed Apr 17 08:38:18 2024 00:41:45.749 read: IOPS=8620, BW=135MiB/s (141MB/s)(271MiB/2010msec) 00:41:45.749 slat (usec): min=2, max=112, avg= 3.56, stdev= 1.75 00:41:45.749 clat (usec): min=2041, max=22412, avg=8820.30, stdev=2520.44 00:41:45.749 lat (usec): min=2044, max=22417, avg=8823.86, stdev=2520.86 00:41:45.749 clat percentiles (usec): 00:41:45.749 | 1.00th=[ 4621], 5.00th=[ 5538], 10.00th=[ 5997], 20.00th=[ 6783], 00:41:45.749 | 30.00th=[ 7373], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9110], 00:41:45.749 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[11469], 95.00th=[13173], 00:41:45.749 | 99.00th=[18220], 99.50th=[20317], 99.90th=[21890], 99.95th=[22152], 00:41:45.749 | 99.99th=[22414] 00:41:45.749 bw ( KiB/s): min=62112, max=83392, per=51.70%, avg=71312.00, stdev=9091.85, samples=4 00:41:45.749 iops : min= 3882, max= 5212, avg=4457.00, stdev=568.24, samples=4 00:41:45.749 write: IOPS=5135, BW=80.2MiB/s (84.1MB/s)(146MiB/1814msec); 0 zone resets 00:41:45.749 slat (usec): min=29, max=292, avg=38.08, stdev= 4.95 00:41:45.749 clat (usec): min=4885, max=23316, avg=10423.43, stdev=2131.52 00:41:45.749 lat (usec): min=4923, max=23367, avg=10461.51, stdev=2132.74 00:41:45.749 clat percentiles (usec): 00:41:45.749 | 1.00th=[ 7046], 5.00th=[ 7767], 10.00th=[ 8160], 20.00th=[ 8848], 00:41:45.749 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10421], 00:41:45.749 | 70.00th=[11076], 80.00th=[11863], 90.00th=[13042], 95.00th=[14222], 00:41:45.749 | 99.00th=[17433], 99.50th=[20055], 99.90th=[22938], 99.95th=[23200], 00:41:45.749 | 99.99th=[23200] 00:41:45.749 bw ( KiB/s): min=66368, max=85888, per=90.34%, avg=74232.00, stdev=8377.34, samples=4 00:41:45.749 iops : min= 4148, max= 5368, avg=4639.50, stdev=523.58, samples=4 00:41:45.749 lat (msec) : 4=0.25%, 10=64.77%, 20=34.40%, 50=0.59% 00:41:45.749 cpu : usr=72.03%, sys=18.22%, ctx=3, majf=0, minf=23 00:41:45.749 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:41:45.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:45.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:45.749 issued rwts: total=17328,9316,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:45.749 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:45.749 00:41:45.749 Run status group 0 (all jobs): 00:41:45.749 READ: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=271MiB (284MB), run=2010-2010msec 00:41:45.749 WRITE: bw=80.2MiB/s (84.1MB/s), 80.2MiB/s-80.2MiB/s (84.1MB/s-84.1MB/s), io=146MiB (153MB), run=1814-1814msec 00:41:45.749 08:38:18 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:45.749 08:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:45.749 08:38:18 -- common/autotest_common.sh@10 -- # set +x 00:41:45.749 08:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:45.749 08:38:18 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:41:45.749 08:38:18 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:41:45.749 08:38:18 -- host/fio.sh@49 -- # get_nvme_bdfs 00:41:45.749 08:38:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:41:45.749 08:38:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:41:45.749 08:38:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:41:45.749 08:38:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:41:45.749 08:38:18 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:41:45.749 08:38:18 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:41:45.749 08:38:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:41:45.749 08:38:18 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:41:45.749 08:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:45.749 08:38:18 -- common/autotest_common.sh@10 -- # set +x 00:41:45.749 Nvme0n1 00:41:45.749 08:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:45.749 08:38:18 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:41:45.749 08:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:45.749 08:38:18 -- common/autotest_common.sh@10 -- # set +x 00:41:45.749 08:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:45.749 08:38:18 -- host/fio.sh@51 -- # ls_guid=83636728-9d1d-4dfe-9e1d-a7ca980e1c13 00:41:45.749 08:38:18 -- host/fio.sh@52 -- # get_lvs_free_mb 83636728-9d1d-4dfe-9e1d-a7ca980e1c13 00:41:45.749 08:38:18 -- common/autotest_common.sh@1343 -- # local lvs_uuid=83636728-9d1d-4dfe-9e1d-a7ca980e1c13 00:41:45.749 08:38:18 -- common/autotest_common.sh@1344 -- # local lvs_info 00:41:45.749 08:38:18 -- common/autotest_common.sh@1345 -- # local fc 00:41:45.749 08:38:18 -- common/autotest_common.sh@1346 -- # local cs 00:41:45.749 08:38:18 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:41:45.749 08:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:45.749 08:38:18 -- common/autotest_common.sh@10 -- # set +x 00:41:45.749 08:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:45.749 08:38:18 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:41:45.749 { 00:41:45.749 "base_bdev": "Nvme0n1", 00:41:45.749 "block_size": 4096, 00:41:45.749 "cluster_size": 1073741824, 00:41:45.749 "free_clusters": 4, 
00:41:45.749 "name": "lvs_0", 00:41:45.749 "total_data_clusters": 4, 00:41:45.749 "uuid": "83636728-9d1d-4dfe-9e1d-a7ca980e1c13" 00:41:45.749 } 00:41:45.749 ]' 00:41:45.749 08:38:18 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="83636728-9d1d-4dfe-9e1d-a7ca980e1c13") .free_clusters' 00:41:45.749 08:38:18 -- common/autotest_common.sh@1348 -- # fc=4 00:41:45.749 08:38:18 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="83636728-9d1d-4dfe-9e1d-a7ca980e1c13") .cluster_size' 00:41:45.749 08:38:18 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:41:45.749 08:38:18 -- common/autotest_common.sh@1352 -- # free_mb=4096 00:41:45.749 4096 00:41:45.749 08:38:18 -- common/autotest_common.sh@1353 -- # echo 4096 00:41:45.749 08:38:18 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 4096 00:41:45.749 08:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:45.749 08:38:18 -- common/autotest_common.sh@10 -- # set +x 00:41:45.749 adc04b15-02d5-4a38-a5ba-6ef88d459826 00:41:45.749 08:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:45.749 08:38:18 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:41:45.749 08:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:45.749 08:38:18 -- common/autotest_common.sh@10 -- # set +x 00:41:45.749 08:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:45.749 08:38:18 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:41:45.749 08:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:45.749 08:38:18 -- common/autotest_common.sh@10 -- # set +x 00:41:45.749 08:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:45.749 08:38:18 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:45.749 08:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:45.749 08:38:18 -- common/autotest_common.sh@10 -- # set +x 00:41:45.749 08:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:45.749 08:38:18 -- host/fio.sh@57 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:41:45.749 08:38:18 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:41:45.749 08:38:18 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:41:45.749 08:38:18 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:45.749 08:38:18 -- common/autotest_common.sh@1318 -- # local sanitizers 00:41:45.749 08:38:18 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:41:45.749 08:38:18 -- common/autotest_common.sh@1320 -- # shift 00:41:45.749 08:38:18 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:41:45.749 08:38:18 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:41:45.749 08:38:18 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:41:45.749 08:38:18 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:41:45.750 08:38:18 -- common/autotest_common.sh@1324 -- # grep libasan 00:41:45.750 08:38:18 -- common/autotest_common.sh@1324 -- # asan_lib= 
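[Annotation] The get_lvs_free_mb steps above compute the lvolstore's free space from the bdev_lvol_get_lvstores JSON. A sketch of the same arithmetic, assuming rpc.py is invoked directly against the default socket rather than through the harness's rpc_cmd wrapper; the jq filters and the UUID are the ones from this run.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=83636728-9d1d-4dfe-9e1d-a7ca980e1c13

    # Pull the store's free cluster count and cluster size by UUID.
    fc=$("$rpc" bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .free_clusters")
    cs=$("$rpc" bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .cluster_size")

    # 4 free clusters x 1 GiB cluster_size = 4096 MiB, matching the "4096"
    # echoed in the trace and used as the size of lvs_0/lbd_0.
    free_mb=$(( fc * cs / 1024 / 1024 ))
    echo "$free_mb"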
00:41:45.750 08:38:18 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:41:45.750 08:38:18 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:41:45.750 08:38:18 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:41:45.750 08:38:18 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:41:45.750 08:38:18 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:41:45.750 08:38:18 -- common/autotest_common.sh@1324 -- # asan_lib= 00:41:45.750 08:38:18 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:41:45.750 08:38:18 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:41:45.750 08:38:18 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:41:45.750 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:41:45.750 fio-3.35 00:41:45.750 Starting 1 thread 00:41:48.283 00:41:48.283 test: (groupid=0, jobs=1): err= 0: pid=82466: Wed Apr 17 08:38:21 2024 00:41:48.283 read: IOPS=7217, BW=28.2MiB/s (29.6MB/s)(56.6MiB/2007msec) 00:41:48.283 slat (nsec): min=1789, max=218241, avg=2049.60, stdev=2175.45 00:41:48.283 clat (usec): min=3583, max=16775, avg=9417.80, stdev=900.43 00:41:48.283 lat (usec): min=3588, max=16777, avg=9419.85, stdev=900.34 00:41:48.283 clat percentiles (usec): 00:41:48.283 | 1.00th=[ 7504], 5.00th=[ 8094], 10.00th=[ 8455], 20.00th=[ 8717], 00:41:48.283 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:41:48.283 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:41:48.283 | 99.00th=[11600], 99.50th=[12125], 99.90th=[15533], 99.95th=[16581], 00:41:48.283 | 99.99th=[16712] 00:41:48.283 bw ( KiB/s): min=27544, max=29608, per=99.77%, avg=28804.00, stdev=897.77, samples=4 00:41:48.283 iops : min= 6886, max= 7402, avg=7201.00, stdev=224.44, samples=4 00:41:48.283 write: IOPS=7184, BW=28.1MiB/s (29.4MB/s)(56.3MiB/2007msec); 0 zone resets 00:41:48.283 slat (nsec): min=1845, max=137309, avg=2139.04, stdev=1297.32 00:41:48.283 clat (usec): min=1622, max=14285, avg=8294.69, stdev=740.85 00:41:48.283 lat (usec): min=1630, max=14287, avg=8296.82, stdev=740.84 00:41:48.283 clat percentiles (usec): 00:41:48.283 | 1.00th=[ 6521], 5.00th=[ 7177], 10.00th=[ 7439], 20.00th=[ 7701], 00:41:48.283 | 30.00th=[ 7963], 40.00th=[ 8094], 50.00th=[ 8291], 60.00th=[ 8455], 00:41:48.283 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9241], 95.00th=[ 9503], 00:41:48.283 | 99.00th=[10028], 99.50th=[10290], 99.90th=[11338], 99.95th=[12649], 00:41:48.283 | 99.99th=[14222] 00:41:48.283 bw ( KiB/s): min=28488, max=28928, per=100.00%, avg=28754.00, stdev=214.50, samples=4 00:41:48.283 iops : min= 7122, max= 7232, avg=7188.50, stdev=53.63, samples=4 00:41:48.283 lat (msec) : 2=0.01%, 4=0.09%, 10=87.63%, 20=12.27% 00:41:48.283 cpu : usr=77.02%, sys=18.00%, ctx=2, majf=0, minf=24 00:41:48.283 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:41:48.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:48.283 issued rwts: total=14486,14420,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.283 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:48.283 00:41:48.283 Run status group 0 (all jobs): 
00:41:48.283 READ: bw=28.2MiB/s (29.6MB/s), 28.2MiB/s-28.2MiB/s (29.6MB/s-29.6MB/s), io=56.6MiB (59.3MB), run=2007-2007msec 00:41:48.283 WRITE: bw=28.1MiB/s (29.4MB/s), 28.1MiB/s-28.1MiB/s (29.4MB/s-29.4MB/s), io=56.3MiB (59.1MB), run=2007-2007msec 00:41:48.283 08:38:21 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:48.283 08:38:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:48.283 08:38:21 -- common/autotest_common.sh@10 -- # set +x 00:41:48.283 08:38:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:48.283 08:38:21 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:41:48.283 08:38:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:48.283 08:38:21 -- common/autotest_common.sh@10 -- # set +x 00:41:48.283 08:38:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:48.283 08:38:21 -- host/fio.sh@62 -- # ls_nested_guid=aaa34e52-6537-4a61-aa7c-e989e9f74a2e 00:41:48.283 08:38:21 -- host/fio.sh@63 -- # get_lvs_free_mb aaa34e52-6537-4a61-aa7c-e989e9f74a2e 00:41:48.283 08:38:21 -- common/autotest_common.sh@1343 -- # local lvs_uuid=aaa34e52-6537-4a61-aa7c-e989e9f74a2e 00:41:48.283 08:38:21 -- common/autotest_common.sh@1344 -- # local lvs_info 00:41:48.283 08:38:21 -- common/autotest_common.sh@1345 -- # local fc 00:41:48.283 08:38:21 -- common/autotest_common.sh@1346 -- # local cs 00:41:48.283 08:38:21 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:41:48.283 08:38:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:48.283 08:38:21 -- common/autotest_common.sh@10 -- # set +x 00:41:48.283 08:38:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:48.283 08:38:21 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:41:48.283 { 00:41:48.283 "base_bdev": "Nvme0n1", 00:41:48.283 "block_size": 4096, 00:41:48.283 "cluster_size": 1073741824, 00:41:48.283 "free_clusters": 0, 00:41:48.283 "name": "lvs_0", 00:41:48.283 "total_data_clusters": 4, 00:41:48.283 "uuid": "83636728-9d1d-4dfe-9e1d-a7ca980e1c13" 00:41:48.283 }, 00:41:48.283 { 00:41:48.283 "base_bdev": "adc04b15-02d5-4a38-a5ba-6ef88d459826", 00:41:48.283 "block_size": 4096, 00:41:48.283 "cluster_size": 4194304, 00:41:48.283 "free_clusters": 1022, 00:41:48.283 "name": "lvs_n_0", 00:41:48.283 "total_data_clusters": 1022, 00:41:48.283 "uuid": "aaa34e52-6537-4a61-aa7c-e989e9f74a2e" 00:41:48.283 } 00:41:48.283 ]' 00:41:48.283 08:38:21 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="aaa34e52-6537-4a61-aa7c-e989e9f74a2e") .free_clusters' 00:41:48.283 08:38:21 -- common/autotest_common.sh@1348 -- # fc=1022 00:41:48.283 08:38:21 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="aaa34e52-6537-4a61-aa7c-e989e9f74a2e") .cluster_size' 00:41:48.283 08:38:21 -- common/autotest_common.sh@1349 -- # cs=4194304 00:41:48.283 08:38:21 -- common/autotest_common.sh@1352 -- # free_mb=4088 00:41:48.283 4088 00:41:48.283 08:38:21 -- common/autotest_common.sh@1353 -- # echo 4088 00:41:48.283 08:38:21 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:41:48.283 08:38:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:48.283 08:38:21 -- common/autotest_common.sh@10 -- # set +x 00:41:48.283 01e511cd-4220-4a27-a974-33f8d04d5691 00:41:48.283 08:38:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:48.283 08:38:21 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:41:48.283 08:38:21 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:41:48.283 08:38:21 -- common/autotest_common.sh@10 -- # set +x 00:41:48.283 08:38:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:48.283 08:38:21 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:41:48.283 08:38:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:48.283 08:38:21 -- common/autotest_common.sh@10 -- # set +x 00:41:48.283 08:38:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:48.283 08:38:21 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:41:48.283 08:38:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:48.283 08:38:21 -- common/autotest_common.sh@10 -- # set +x 00:41:48.283 08:38:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:48.283 08:38:21 -- host/fio.sh@68 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:41:48.283 08:38:21 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:41:48.283 08:38:21 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:41:48.283 08:38:21 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:48.283 08:38:21 -- common/autotest_common.sh@1318 -- # local sanitizers 00:41:48.283 08:38:21 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:41:48.283 08:38:21 -- common/autotest_common.sh@1320 -- # shift 00:41:48.283 08:38:21 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:41:48.283 08:38:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:41:48.283 08:38:21 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:41:48.283 08:38:21 -- common/autotest_common.sh@1324 -- # grep libasan 00:41:48.283 08:38:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:41:48.283 08:38:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:41:48.283 08:38:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:41:48.283 08:38:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:41:48.283 08:38:21 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:41:48.283 08:38:21 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:41:48.283 08:38:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:41:48.283 08:38:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:41:48.283 08:38:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:41:48.284 08:38:21 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:41:48.284 08:38:21 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:41:48.284 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:41:48.284 fio-3.35 00:41:48.284 Starting 1 thread 00:41:50.812 00:41:50.812 test: (groupid=0, jobs=1): err= 0: pid=82515: Wed Apr 17 08:38:23 2024 00:41:50.812 read: IOPS=6424, BW=25.1MiB/s 
(26.3MB/s)(50.4MiB/2008msec) 00:41:50.812 slat (nsec): min=1598, max=436605, avg=2199.11, stdev=4919.55 00:41:50.812 clat (usec): min=4344, max=19356, avg=10548.53, stdev=948.43 00:41:50.812 lat (usec): min=4358, max=19358, avg=10550.73, stdev=948.08 00:41:50.812 clat percentiles (usec): 00:41:50.812 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9765], 00:41:50.812 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:41:50.812 | 70.00th=[10945], 80.00th=[11338], 90.00th=[11731], 95.00th=[12125], 00:41:50.812 | 99.00th=[12780], 99.50th=[13042], 99.90th=[16450], 99.95th=[16909], 00:41:50.812 | 99.99th=[19268] 00:41:50.812 bw ( KiB/s): min=24736, max=26240, per=99.90%, avg=25674.00, stdev=652.85, samples=4 00:41:50.812 iops : min= 6184, max= 6560, avg=6418.50, stdev=163.21, samples=4 00:41:50.812 write: IOPS=6430, BW=25.1MiB/s (26.3MB/s)(50.4MiB/2008msec); 0 zone resets 00:41:50.812 slat (nsec): min=1643, max=306222, avg=2284.99, stdev=3074.63 00:41:50.812 clat (usec): min=3252, max=16791, avg=9265.81, stdev=824.91 00:41:50.812 lat (usec): min=3269, max=16793, avg=9268.10, stdev=824.67 00:41:50.812 clat percentiles (usec): 00:41:50.812 | 1.00th=[ 7439], 5.00th=[ 8029], 10.00th=[ 8291], 20.00th=[ 8586], 00:41:50.812 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:41:50.812 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10552], 00:41:50.812 | 99.00th=[11076], 99.50th=[11338], 99.90th=[15533], 99.95th=[15926], 00:41:50.812 | 99.99th=[16712] 00:41:50.812 bw ( KiB/s): min=25408, max=25928, per=99.91%, avg=25698.00, stdev=239.14, samples=4 00:41:50.812 iops : min= 6352, max= 6482, avg=6424.50, stdev=59.79, samples=4 00:41:50.812 lat (msec) : 4=0.02%, 10=55.46%, 20=44.51% 00:41:50.812 cpu : usr=76.63%, sys=18.78%, ctx=3, majf=0, minf=24 00:41:50.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:41:50.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:50.812 issued rwts: total=12901,12912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:50.812 00:41:50.812 Run status group 0 (all jobs): 00:41:50.812 READ: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=50.4MiB (52.8MB), run=2008-2008msec 00:41:50.812 WRITE: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=50.4MiB (52.9MB), run=2008-2008msec 00:41:50.812 08:38:23 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:41:50.812 08:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:50.812 08:38:23 -- common/autotest_common.sh@10 -- # set +x 00:41:50.812 08:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:50.812 08:38:23 -- host/fio.sh@72 -- # sync 00:41:50.812 08:38:23 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:41:50.812 08:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:50.812 08:38:23 -- common/autotest_common.sh@10 -- # set +x 00:41:50.812 08:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:50.812 08:38:23 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:41:50.812 08:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:50.812 08:38:23 -- common/autotest_common.sh@10 -- # set +x 00:41:50.812 08:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:50.812 08:38:23 -- 
host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:41:50.812 08:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:50.812 08:38:23 -- common/autotest_common.sh@10 -- # set +x 00:41:50.812 08:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:50.812 08:38:23 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:41:50.812 08:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:50.812 08:38:23 -- common/autotest_common.sh@10 -- # set +x 00:41:50.812 08:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:50.812 08:38:23 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:41:50.812 08:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:41:50.812 08:38:23 -- common/autotest_common.sh@10 -- # set +x 00:41:53.349 08:38:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:41:53.349 08:38:26 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:41:53.349 08:38:26 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:41:53.349 08:38:26 -- host/fio.sh@84 -- # nvmftestfini 00:41:53.349 08:38:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:41:53.349 08:38:26 -- nvmf/common.sh@116 -- # sync 00:41:53.349 08:38:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:41:53.349 08:38:26 -- nvmf/common.sh@119 -- # set +e 00:41:53.349 08:38:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:41:53.349 08:38:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:41:53.349 rmmod nvme_tcp 00:41:53.349 rmmod nvme_fabrics 00:41:53.349 rmmod nvme_keyring 00:41:53.349 08:38:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:41:53.350 08:38:26 -- nvmf/common.sh@123 -- # set -e 00:41:53.350 08:38:26 -- nvmf/common.sh@124 -- # return 0 00:41:53.350 08:38:26 -- nvmf/common.sh@477 -- # '[' -n 82259 ']' 00:41:53.350 08:38:26 -- nvmf/common.sh@478 -- # killprocess 82259 00:41:53.350 08:38:26 -- common/autotest_common.sh@926 -- # '[' -z 82259 ']' 00:41:53.350 08:38:26 -- common/autotest_common.sh@930 -- # kill -0 82259 00:41:53.350 08:38:26 -- common/autotest_common.sh@931 -- # uname 00:41:53.350 08:38:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:41:53.350 08:38:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82259 00:41:53.350 08:38:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:41:53.350 08:38:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:41:53.350 killing process with pid 82259 00:41:53.350 08:38:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82259' 00:41:53.350 08:38:26 -- common/autotest_common.sh@945 -- # kill 82259 00:41:53.350 08:38:26 -- common/autotest_common.sh@950 -- # wait 82259 00:41:53.350 08:38:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:41:53.350 08:38:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:41:53.350 08:38:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:41:53.350 08:38:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:53.350 08:38:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:41:53.350 08:38:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:53.350 08:38:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:53.350 08:38:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:53.350 08:38:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:41:53.350 00:41:53.350 real 0m14.522s 00:41:53.350 user 1m0.662s 00:41:53.350 sys 0m3.157s 00:41:53.350 08:38:26 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:41:53.350 ************************************ 00:41:53.350 END TEST nvmf_fio_host 00:41:53.350 ************************************ 00:41:53.350 08:38:26 -- common/autotest_common.sh@10 -- # set +x 00:41:53.350 08:38:26 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:41:53.350 08:38:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:41:53.350 08:38:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:41:53.350 08:38:26 -- common/autotest_common.sh@10 -- # set +x 00:41:53.350 ************************************ 00:41:53.350 START TEST nvmf_failover 00:41:53.350 ************************************ 00:41:53.350 08:38:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:41:53.350 * Looking for test storage... 00:41:53.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:41:53.350 08:38:26 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:53.350 08:38:26 -- nvmf/common.sh@7 -- # uname -s 00:41:53.350 08:38:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:53.350 08:38:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:53.350 08:38:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:53.350 08:38:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:53.350 08:38:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:53.350 08:38:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:53.350 08:38:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:53.350 08:38:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:53.350 08:38:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:53.350 08:38:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:53.350 08:38:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:41:53.350 08:38:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:41:53.350 08:38:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:53.350 08:38:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:53.350 08:38:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:53.610 08:38:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:53.610 08:38:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:53.610 08:38:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:53.610 08:38:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:53.610 08:38:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.610 08:38:26 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.610 08:38:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.610 08:38:26 -- paths/export.sh@5 -- # export PATH 00:41:53.610 08:38:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.610 08:38:26 -- nvmf/common.sh@46 -- # : 0 00:41:53.610 08:38:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:41:53.610 08:38:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:41:53.610 08:38:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:41:53.610 08:38:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:53.610 08:38:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:53.610 08:38:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:41:53.610 08:38:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:41:53.610 08:38:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:41:53.610 08:38:26 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:53.610 08:38:26 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:53.610 08:38:26 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:53.610 08:38:26 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:41:53.610 08:38:26 -- host/failover.sh@18 -- # nvmftestinit 00:41:53.610 08:38:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:41:53.610 08:38:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:53.610 08:38:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:41:53.610 08:38:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:41:53.610 08:38:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:41:53.610 08:38:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:53.610 08:38:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:53.610 08:38:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:53.610 08:38:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:41:53.610 08:38:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:41:53.610 08:38:26 -- nvmf/common.sh@411 
-- # [[ virt == phy ]] 00:41:53.610 08:38:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:41:53.610 08:38:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:41:53.610 08:38:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:41:53.610 08:38:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:53.610 08:38:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:53.610 08:38:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:41:53.610 08:38:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:41:53.610 08:38:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:53.610 08:38:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:53.610 08:38:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:53.610 08:38:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:53.610 08:38:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:53.610 08:38:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:53.610 08:38:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:53.610 08:38:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:53.610 08:38:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:41:53.610 08:38:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:41:53.610 Cannot find device "nvmf_tgt_br" 00:41:53.610 08:38:26 -- nvmf/common.sh@154 -- # true 00:41:53.610 08:38:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:41:53.610 Cannot find device "nvmf_tgt_br2" 00:41:53.610 08:38:26 -- nvmf/common.sh@155 -- # true 00:41:53.610 08:38:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:41:53.610 08:38:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:41:53.610 Cannot find device "nvmf_tgt_br" 00:41:53.610 08:38:26 -- nvmf/common.sh@157 -- # true 00:41:53.610 08:38:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:41:53.610 Cannot find device "nvmf_tgt_br2" 00:41:53.610 08:38:26 -- nvmf/common.sh@158 -- # true 00:41:53.610 08:38:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:41:53.610 08:38:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:41:53.610 08:38:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:53.610 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:53.610 08:38:26 -- nvmf/common.sh@161 -- # true 00:41:53.610 08:38:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:53.610 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:53.610 08:38:26 -- nvmf/common.sh@162 -- # true 00:41:53.610 08:38:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:41:53.610 08:38:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:53.610 08:38:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:53.610 08:38:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:53.610 08:38:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:53.610 08:38:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:53.870 08:38:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:53.870 08:38:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.2/24 dev nvmf_tgt_if 00:41:53.870 08:38:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:41:53.870 08:38:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:41:53.870 08:38:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:41:53.870 08:38:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:41:53.870 08:38:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:41:53.870 08:38:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:53.870 08:38:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:53.871 08:38:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:53.871 08:38:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:41:53.871 08:38:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:41:53.871 08:38:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:41:53.871 08:38:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:53.871 08:38:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:53.871 08:38:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:53.871 08:38:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:53.871 08:38:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:41:53.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:53.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:41:53.871 00:41:53.871 --- 10.0.0.2 ping statistics --- 00:41:53.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:53.871 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:41:53.871 08:38:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:41:53.871 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:53.871 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:41:53.871 00:41:53.871 --- 10.0.0.3 ping statistics --- 00:41:53.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:53.871 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:41:53.871 08:38:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:53.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:53.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:41:53.871 00:41:53.871 --- 10.0.0.1 ping statistics --- 00:41:53.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:53.871 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:41:53.871 08:38:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:53.871 08:38:27 -- nvmf/common.sh@421 -- # return 0 00:41:53.871 08:38:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:41:53.871 08:38:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:53.871 08:38:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:41:53.871 08:38:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:41:53.871 08:38:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:53.871 08:38:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:41:53.871 08:38:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:41:53.871 08:38:27 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:41:53.871 08:38:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:41:53.871 08:38:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:41:53.871 08:38:27 -- common/autotest_common.sh@10 -- # set +x 00:41:53.871 08:38:27 -- nvmf/common.sh@469 -- # nvmfpid=82750 00:41:53.871 08:38:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:41:53.871 08:38:27 -- nvmf/common.sh@470 -- # waitforlisten 82750 00:41:53.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:53.871 08:38:27 -- common/autotest_common.sh@819 -- # '[' -z 82750 ']' 00:41:53.871 08:38:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:53.871 08:38:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:41:53.871 08:38:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:53.871 08:38:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:41:53.871 08:38:27 -- common/autotest_common.sh@10 -- # set +x 00:41:53.871 [2024-04-17 08:38:27.134527] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:41:53.871 [2024-04-17 08:38:27.134599] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:54.130 [2024-04-17 08:38:27.275424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:54.130 [2024-04-17 08:38:27.369761] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:41:54.130 [2024-04-17 08:38:27.369914] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:54.130 [2024-04-17 08:38:27.369921] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:54.130 [2024-04-17 08:38:27.369926] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
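[Annotation] nvmfappstart, as traced above, launches nvmf_tgt inside the test namespace with reactor mask 0xE and blocks until the RPC socket answers. A rough equivalent, with the caveat that the poll loop below is an editorial stand-in for the harness's waitforlisten helper rather than its actual implementation.

    sock=/var/tmp/spdk.sock
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Poll until the target has created its RPC socket and responds.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt is up as pid $nvmfpid"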
00:41:54.130 [2024-04-17 08:38:27.370208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:41:54.130 [2024-04-17 08:38:27.370095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:41:54.130 [2024-04-17 08:38:27.370211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:41:54.698 08:38:27 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:41:54.698 08:38:27 -- common/autotest_common.sh@852 -- # return 0
00:41:54.698 08:38:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:41:54.698 08:38:27 -- common/autotest_common.sh@718 -- # xtrace_disable
00:41:54.698 08:38:27 -- common/autotest_common.sh@10 -- # set +x
00:41:54.957 08:38:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:41:54.957 08:38:28 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:41:54.957 [2024-04-17 08:38:28.249898] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:41:54.957 08:38:28 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:41:55.215 Malloc0
00:41:55.215 08:38:28 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:41:55.472 08:38:28 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:41:55.730 08:38:28 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:41:55.988 [2024-04-17 08:38:29.179199] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:41:55.988 08:38:29 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:41:56.246 [2024-04-17 08:38:29.383001] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:41:56.246 08:38:29 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:41:56.530 [2024-04-17 08:38:29.582862] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:41:56.530 08:38:29 -- host/failover.sh@31 -- # bdevperf_pid=82862
00:41:56.530 08:38:29 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:41:56.531 08:38:29 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:41:56.531 08:38:29 -- host/failover.sh@34 -- # waitforlisten 82862 /var/tmp/bdevperf.sock
00:41:56.531 08:38:29 -- common/autotest_common.sh@819 -- # '[' -z 82862 ']'
00:41:56.531 08:38:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:41:56.531 08:38:29 -- common/autotest_common.sh@824 -- # local max_retries=100
00:41:56.531 08:38:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:41:56.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
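Condensed, the rpc.py calls traced above perform a minimal NVMe-oF bring-up on the target: one TCP transport, one RAM-backed namespace, one subsystem, and three TCP listeners on the same address; the three ports are what gives the failover test alternate paths to remove and restore. The same configuration, gathered into one place for readability (arguments exactly as traced; the loop is just a condensation):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, 8 KiB in-capsule data
    $rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB malloc bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # expose Malloc0 through the subsystem
    for port in 4420 4421 4422; do                                    # three listeners on 10.0.0.2 = three paths
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done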
00:41:56.531 08:38:29 -- common/autotest_common.sh@828 -- # xtrace_disable
00:41:56.531 08:38:29 -- common/autotest_common.sh@10 -- # set +x
00:41:57.465 08:38:30 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:41:57.465 08:38:30 -- common/autotest_common.sh@852 -- # return 0
00:41:57.465 08:38:30 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:41:57.725 NVMe0n1
00:41:57.725 08:38:30 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:41:57.984
00:41:57.984 08:38:31 -- host/failover.sh@39 -- # run_test_pid=82909
00:41:57.984 08:38:31 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:41:57.984 08:38:31 -- host/failover.sh@41 -- # sleep 1
00:41:58.921 08:38:32 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:41:59.182 [2024-04-17 08:38:32.337848] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398c20 is same with the state(5) to be set
[... the tcp.c:1574 message above repeats for tqpair=0x1398c20, identical except for its microsecond timestamp, several dozen more times while the qpair is torn down; duplicates omitted ...]
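Two bdev_nvme_attach_controller calls with the same bdev name (-b NVMe0) but different ports register the second trid as an alternate, failover path for the same controller. The test then removes the listener the host is actually connected to while bdevperf's verify workload is running, forcing the host to reconnect on the surviving port; the repeated tcp.c:1574 messages are the target-side teardown noisily re-setting an already-set qpair receive state, which is expected during a forced disconnect. The trigger, reduced to its three essential calls (a paraphrase of failover.sh@35-@43 as traced above, not the script verbatim):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # active path
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # failover path
    # with verify I/O in flight, pull the active path out from under the host:
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420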
00:41:59.183 08:38:32 -- host/failover.sh@45 -- # sleep 3
00:42:02.493 08:38:35 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:42:02.493
00:42:02.493 08:38:35 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:42:02.753 [2024-04-17 08:38:35.870825] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1399b20 is same with the state(5) to be set
[... the same message, now for tqpair=0x1399b20, repeats several dozen more times during the second forced disconnect; duplicates omitted ...]
00:42:02.754 08:38:35 -- host/failover.sh@50 -- # sleep 3
00:42:06.042 08:38:39 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:42:06.042 [2024-04-17 08:38:39.128364] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:42:06.042 08:38:39 -- host/failover.sh@55 -- # sleep 1
00:42:06.976 08:38:40 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:42:07.236 [2024-04-17 08:38:40.370784] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154df80 is same with the state(5) to be set
[... the same message, now for tqpair=0x154df80, repeats about two dozen more times during the third forced disconnect; duplicates omitted ...]
00:42:07.237 08:38:40 -- host/failover.sh@59 -- # wait 82909
00:42:13.827 0
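The bare "0" after wait 82909 is the whole verdict: perform_tests was started in the background, its PID captured, and the script blocks on it after the last failover; exit status 0 means the verify workload ran its full 15 seconds across all three listener flips without an I/O error. The plumbing, paraphrased from the traced failover.sh@38, @39 and @59 (assumed shape, not the script verbatim):

    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &   # drive the bdevperf test over its RPC socket
    run_test_pid=$!                                 # 82909 in this run
    # ... the add/remove-listener failover sequence runs here ...
    wait "$run_test_pid"                            # "0" above = I/O survived every failover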
00:42:13.827 08:38:46 -- host/failover.sh@61 -- # killprocess 82862
00:42:13.827 08:38:46 -- common/autotest_common.sh@926 -- # '[' -z 82862 ']'
00:42:13.827 08:38:46 -- common/autotest_common.sh@930 -- # kill -0 82862
00:42:13.827 08:38:46 -- common/autotest_common.sh@931 -- # uname
00:42:13.827 08:38:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:42:13.827 08:38:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82862
00:42:13.827 08:38:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:42:13.827 killing process with pid 82862
00:42:13.827 08:38:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:42:13.827 08:38:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82862'
00:42:13.827 08:38:46 -- common/autotest_common.sh@945 -- # kill 82862
00:42:13.827 08:38:46 -- common/autotest_common.sh@950 -- # wait 82862
00:42:13.827 08:38:46 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:42:13.827 [2024-04-17 08:38:29.661026] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:42:13.827 [2024-04-17 08:38:29.661117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82862 ]
00:42:13.827 [2024-04-17 08:38:29.802203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:13.827 [2024-04-17 08:38:29.909234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:42:13.827 Running I/O for 15 seconds...
00:42:13.827 [2024-04-17 08:38:32.338583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:42:13.827 [2024-04-17 08:38:32.338648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the rest of the try.txt dump is hundreds of these nvme_io_qpair_print_command / spdk_nvme_print_completion pairs, one per in-flight I/O, differing only in opcode (READ/WRITE), cid and lba, and every one completed ABORTED - SQ DELETION; duplicates omitted, and the excerpt breaks off mid-entry ...]
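ABORTED - SQ DELETION (generic command status, code 0x08) is what every command still in flight receives when its submission queue, here the TCP qpair, is deleted. A try.txt full of these pairs while the run still exits 0 is the expected signature of this test: the target aborts in-flight I/O at each listener removal, and the host-side bdev_nvme resubmits on the surviving path. A quick offline sanity check on such a dump might be (a hypothetical convenience, not part of the test suite):

    try=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
    grep -c 'ABORTED - SQ DELETION' "$try"        # how many I/Os were aborted across the failovers
    grep -c 'nvme_io_qpair_print_command' "$try"  # should roughly match: one such completion per printed command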
m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.828 [2024-04-17 08:38:32.340364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.828 [2024-04-17 08:38:32.340385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:123072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.828 [2024-04-17 08:38:32.340484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.828 [2024-04-17 08:38:32.340505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:123096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:123104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.828 [2024-04-17 08:38:32.340548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:123112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 
08:38:32.340581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:123128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.828 [2024-04-17 08:38:32.340661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.828 [2024-04-17 08:38:32.340682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:122512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.340980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.340990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.341001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.341011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.341022] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:114 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.828 [2024-04-17 08:38:32.341031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.341043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.828 [2024-04-17 08:38:32.341053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.828 [2024-04-17 08:38:32.341064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.829 [2024-04-17 08:38:32.341075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:32.341087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.829 [2024-04-17 08:38:32.341096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:32.341108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:123184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:32.341117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:32.341129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:123192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:32.341146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:32.341158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.829 [2024-04-17 08:38:32.341168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:32.341180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:32.341189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:32.341201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:32.341211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:32.341223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:123224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:32.341233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:32.341245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 
nsid:1 lba:123232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:32.341255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:32.341267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:123240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:32.341276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:32.341288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.829 [2024-04-17 08:38:32.341298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:32.341310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:123256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:32.341320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:32.341331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:32.341341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:32.341352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:32.341362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:32.341373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:32.341383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:32.341402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:122744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:32.341413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:32.341429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:32.341441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:32.341452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:32.341462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:32.341473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:122792 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:42:13.829 [2024-04-17 08:38:32.341483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:42:13.829 [2024-04-17 08:38:32.341494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f9fc0 is same with the state(5) to be set
00:42:13.829 [2024-04-17 08:38:32.341508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:42:13.829 [2024-04-17 08:38:32.341515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:42:13.829 [2024-04-17 08:38:32.341523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122800 len:8 PRP1 0x0 PRP2 0x0
00:42:13.829 [2024-04-17 08:38:32.341532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:42:13.829 [2024-04-17 08:38:32.341580] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14f9fc0 was disconnected and freed. reset controller.
00:42:13.829 [2024-04-17 08:38:32.341594] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:42:13.829 [2024-04-17 08:38:32.341638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:42:13.829 [2024-04-17 08:38:32.341651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:42:13.829 [2024-04-17 08:38:32.341662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:42:13.829 [2024-04-17 08:38:32.341672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:42:13.829 [2024-04-17 08:38:32.341682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:42:13.829 [2024-04-17 08:38:32.341693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:42:13.829 [2024-04-17 08:38:32.341703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:42:13.829 [2024-04-17 08:38:32.341713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:42:13.829 [2024-04-17 08:38:32.341723] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:13.829 [2024-04-17 08:38:32.344084] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:13.829 [2024-04-17 08:38:32.344115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148d010 (9): Bad file descriptor
00:42:13.829 [2024-04-17 08:38:32.367960] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
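The burst above is the expected shape of a failover in this test: the TCP qpair drops, every queued command is completed with ABORTED - SQ DELETION, bdev_nvme fails over from 10.0.0.2:4420 to 10.0.0.2:4421, and the reset succeeds. Each completion is tagged "(00/08)", i.e. status code type 0h (generic command status) / status code 08h (command aborted due to SQ deletion), followed by the phase/more/do-not-retry bits. The following is a minimal, self-contained sketch of decoding that status word; it assumes the usual NVMe CQE packing (P in bit 0, SC in bits 8:1, SCT in bits 11:9, M in bit 14, DNR in bit 15) and uses stand-in constant names, not SPDK's identifiers or its spdk_nvme_print_completion logic seen in the log.

#include <stdint.h>
#include <stdio.h>

/* Stand-in names mirroring the NVMe spec values in the log; not SPDK macros. */
#define SCT_GENERIC            0x00 /* status code type: generic command status */
#define SC_ABORTED_SQ_DELETION 0x08 /* status code: command aborted due to SQ deletion */

int main(void)
{
	/* Status word matching "(00/08) ... p:0 m:0 dnr:0" from the records above. */
	uint16_t status = (uint16_t)((SCT_GENERIC << 9) | (SC_ABORTED_SQ_DELETION << 1));

	unsigned p   = status & 0x1;          /* phase tag */
	unsigned sc  = (status >> 1) & 0xff;  /* status code */
	unsigned sct = (status >> 9) & 0x7;   /* status code type */
	unsigned m   = (status >> 14) & 0x1;  /* more */
	unsigned dnr = (status >> 15) & 0x1;  /* do not retry */

	printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
	if (sct == SCT_GENERIC && sc == SC_ABORTED_SQ_DELETION)
		printf("command aborted because its submission queue was deleted\n");
	return 0;
}

With dnr:0 these aborts are retryable, which is why the long runs of such notices during qpair teardown are benign here; the signal that matters is the final "Resetting controller successful." record.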
00:42:13.829 [2024-04-17 08:38:35.871687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.871744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.871783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.871794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.871805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.871814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.871825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.871835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.871845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.871854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.871865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.871874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.871886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.871895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.871906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.871915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.871927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.871936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.871947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.871956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.871967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.871976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.871987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.871996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:45 nsid:1 lba:5744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5840 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.829 [2024-04-17 08:38:35.872578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.829 [2024-04-17 08:38:35.872587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.872598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 
08:38:35.872607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.872618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.872627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.872638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.872647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.872658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.872668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.872678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.872687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.872698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.872708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.872718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.872728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.872739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.872748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.872758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.872769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.872780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.872789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.872800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.872809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.872820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.872834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.872845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.872854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.872865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.872874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.872885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.872894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.872904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.872914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.872925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.872934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.872944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.872954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.872965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.872974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.872984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.872994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:42:13.830 [2024-04-17 08:38:35.873236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873456] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873663] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6232 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.873964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.873987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.873998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.874008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.874020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.874030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.874041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.874051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.874063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 08:38:35.874073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.874085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.874095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.874107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.830 [2024-04-17 
08:38:35.874116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.874128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.874139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.874151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.874161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.874172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.874182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.874202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.874212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.830 [2024-04-17 08:38:35.874224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.830 [2024-04-17 08:38:35.874234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:35.874245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:35.874255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:35.874267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.831 [2024-04-17 08:38:35.874276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:35.874288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:35.874298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:35.874309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:35.874319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:35.874331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:35.874343] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:35.874355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:35.874364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:35.874376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:35.874386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:35.874397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:35.874407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:35.874428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:35.874438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:35.874449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:35.874459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:35.874471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fbd10 is same with the state(5) to be set 00:42:13.831 [2024-04-17 08:38:35.874488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:42:13.831 [2024-04-17 08:38:35.874495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:42:13.831 [2024-04-17 08:38:35.874505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5872 len:8 PRP1 0x0 PRP2 0x0 00:42:13.831 [2024-04-17 08:38:35.874515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:35.874562] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14fbd10 was disconnected and freed. reset controller. 
00:42:13.831 [2024-04-17 08:38:35.874575] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:42:13.831 [2024-04-17 08:38:35.874620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:42:13.831 [2024-04-17 08:38:35.874632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:42:13.831 [2024-04-17 08:38:35.874643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:42:13.831 [2024-04-17 08:38:35.874653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:42:13.831 [2024-04-17 08:38:35.874665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:42:13.831 [2024-04-17 08:38:35.874675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:42:13.831 [2024-04-17 08:38:35.874685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:42:13.831 [2024-04-17 08:38:35.874695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:42:13.831 [2024-04-17 08:38:35.874705] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:42:13.831 [2024-04-17 08:38:35.874743] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148d010 (9): Bad file descriptor
00:42:13.831 [2024-04-17 08:38:35.876748] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:42:13.831 [2024-04-17 08:38:35.895736] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
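The records above capture one complete failover cycle: the I/O qpair at 10.0.0.2:4421 is disconnected and freed, bdev_nvme starts failover to 10.0.0.2:4422, the queued admin commands (ASYNC EVENT REQUEST) are aborted with SQ DELETION, the controller is marked failed and disconnected, and the reset then completes successfully. The per-command abort records dominate the capture; as a reading aid, a minimal grep sketch (assuming the bdevperf output was saved to a hypothetical bdevperf.log) keeps only the state transitions:

    # Drop the per-command ABORTED records; keep only failover/reset transitions.
    grep -E 'bdev_nvme_failover_trid|nvme_ctrlr_(fail|disconnect)|_bdev_nvme_reset_ctrlr_complete' bdevperf.log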
00:42:13.831 [2024-04-17 08:38:40.371035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:121744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:121752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:121760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 
08:38:40.371409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:121776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:122376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:122456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:122480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:122504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:122528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.831 [2024-04-17 08:38:40.371602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:122536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.831 [2024-04-17 08:38:40.371624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:121784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:121792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:121824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:121848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:121864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:121904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:121920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371855] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:122000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:122072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.371985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.371995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.372016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.372038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.372059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 
lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.372080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:122584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.831 [2024-04-17 08:38:40.372102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:122592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.831 [2024-04-17 08:38:40.372129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.831 [2024-04-17 08:38:40.372150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.372171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.372193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.372214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.372236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.831 [2024-04-17 08:38:40.372258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:122648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.831 [2024-04-17 08:38:40.372280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.372302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.372323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.372345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.372368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.831 [2024-04-17 08:38:40.372389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:122696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.831 [2024-04-17 08:38:40.372428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.831 [2024-04-17 08:38:40.372451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.831 [2024-04-17 08:38:40.372463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.372472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:122720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.832 [2024-04-17 08:38:40.372493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.372515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:122736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.832 
[2024-04-17 08:38:40.372536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.372557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:122752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.832 [2024-04-17 08:38:40.372579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:122760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.372600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.372623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:122776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.832 [2024-04-17 08:38:40.372644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.372665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.372692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.372714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.372736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:122816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.832 [2024-04-17 08:38:40.372759] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:122824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.832 [2024-04-17 08:38:40.372780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:122128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.372802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.372824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.372845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.372866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.372887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.372908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:122200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.372929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.372951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.372978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.372989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.372999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:122288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:122832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:122840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.832 [2024-04-17 08:38:40.373170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:122856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.832 [2024-04-17 08:38:40.373213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.832 [2024-04-17 08:38:40.373262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.832 [2024-04-17 08:38:40.373283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.832 [2024-04-17 08:38:40.373326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:122912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:122920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373424] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.832 [2024-04-17 08:38:40.373448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:122944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:122952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:122960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.832 [2024-04-17 08:38:40.373512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:122968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:122976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.832 [2024-04-17 08:38:40.373603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:123000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:13.832 [2024-04-17 08:38:40.373668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:122384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:122416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:122440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:122448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.832 [2024-04-17 08:38:40.373942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.832 [2024-04-17 08:38:40.373955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.833 [2024-04-17 08:38:40.373965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.833 [2024-04-17 08:38:40.373977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:122496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.833 [2024-04-17 08:38:40.373987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.833 [2024-04-17 08:38:40.373998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:13.833 [2024-04-17 08:38:40.374008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.833 [2024-04-17 08:38:40.374020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fbeb0 is same with the state(5) to be set 00:42:13.833 [2024-04-17 08:38:40.374033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:42:13.833 [2024-04-17 08:38:40.374042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:42:13.833 [2024-04-17 08:38:40.374050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122520 len:8 PRP1 0x0 PRP2 0x0 00:42:13.833 [2024-04-17 08:38:40.374060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.833 [2024-04-17 08:38:40.374106] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14fbeb0 was disconnected and freed. reset controller. 
00:42:13.833 [2024-04-17 08:38:40.374118] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:42:13.833 [2024-04-17 08:38:40.374166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:42:13.833 [2024-04-17 08:38:40.374185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.833 [2024-04-17 08:38:40.374197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:42:13.833 [2024-04-17 08:38:40.374207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.833 [2024-04-17 08:38:40.374217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:42:13.833 [2024-04-17 08:38:40.374227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.833 [2024-04-17 08:38:40.374238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:42:13.833 [2024-04-17 08:38:40.374248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:13.833 [2024-04-17 08:38:40.374258] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:13.833 [2024-04-17 08:38:40.376498] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:13.833 [2024-04-17 08:38:40.376537] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148d010 (9): Bad file descriptor 00:42:13.833 [2024-04-17 08:38:40.392560] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
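The block above is one complete failover round trip as bdev_nvme logs it: the active qpair is disconnected, every queued I/O is completed manually with ABORTED - SQ DELETION status, the trid fails over from 10.0.0.2:4422 to 10.0.0.2:4420, and the controller reset succeeds. The whole cycle is driven from the host side with plain rpc.py calls; a condensed sketch of the pattern, reusing the same calls host/failover.sh traces below (target address, NQN, bdev name and socket paths as they appear in this log):

    # Register one bdev with three alternate trids on the bdevperf side...
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for port in 4420 4421 4422; do
        "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # ...then drop the active path; bdev_nvme fails over to the next trid and
    # logs "Resetting controller successful" once the reset completes.
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1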
00:42:13.833
00:42:13.833 Latency(us)
00:42:13.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:13.833 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:42:13.833 Verification LBA range: start 0x0 length 0x4000
00:42:13.833 NVMe0n1 : 15.01 13790.45 53.87 214.64 0.00 9123.12 518.71 15224.96
00:42:13.833 ===================================================================================================================
00:42:13.833 Total : 13790.45 53.87 214.64 0.00 9123.12 518.71 15224.96
00:42:13.833 Received shutdown signal, test time was about 15.000000 seconds
00:42:13.833
00:42:13.833 Latency(us)
00:42:13.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:13.833 ===================================================================================================================
00:42:13.833 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:42:13.833 08:38:46 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:42:13.833 08:38:46 -- host/failover.sh@65 -- # count=3
00:42:13.833 08:38:46 -- host/failover.sh@67 -- # (( count != 3 ))
00:42:13.833 08:38:46 -- host/failover.sh@73 -- # bdevperf_pid=83113
00:42:13.833 08:38:46 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:42:13.833 08:38:46 -- host/failover.sh@75 -- # waitforlisten 83113 /var/tmp/bdevperf.sock
00:42:13.833 08:38:46 -- common/autotest_common.sh@819 -- # '[' -z 83113 ']'
00:42:13.833 08:38:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:42:13.833 08:38:46 -- common/autotest_common.sh@824 -- # local max_retries=100
00:42:13.833 08:38:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
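The pass criterion traced at host/failover.sh@65-@67 above is nothing more than a count of recovered resets in the captured bdevperf log: the preceding run forced three failovers, so exactly three "Resetting controller successful" lines must appear. A minimal sketch of the same check (try.txt path as in this log):

    log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$log")
    (( count == 3 )) || exit 1   # one recovered reset per forced failover

bdevperf is then restarted with -z, so it comes up idle and only runs I/O once a perform_tests RPC arrives on /var/tmp/bdevperf.sock; the waitforlisten trace that continues below is polling until that socket exists.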
00:42:13.833 08:38:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:13.833 08:38:46 -- common/autotest_common.sh@10 -- # set +x 00:42:14.400 08:38:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:14.400 08:38:47 -- common/autotest_common.sh@852 -- # return 0 00:42:14.400 08:38:47 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:42:14.400 [2024-04-17 08:38:47.676270] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:42:14.400 08:38:47 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:42:14.969 [2024-04-17 08:38:47.999942] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:42:14.969 08:38:48 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:42:15.228 NVMe0n1 00:42:15.228 08:38:48 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:42:15.486 00:42:15.486 08:38:48 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:42:15.745 00:42:15.745 08:38:49 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:42:15.745 08:38:49 -- host/failover.sh@82 -- # grep -q NVMe0 00:42:16.004 08:38:49 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:42:16.261 08:38:49 -- host/failover.sh@87 -- # sleep 3 00:42:19.567 08:38:52 -- host/failover.sh@88 -- # grep -q NVMe0 00:42:19.567 08:38:52 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:42:19.567 08:38:52 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:42:19.567 08:38:52 -- host/failover.sh@90 -- # run_test_pid=83253 00:42:19.567 08:38:52 -- host/failover.sh@92 -- # wait 83253 00:42:20.947 0 00:42:20.947 08:38:53 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:42:20.947 [2024-04-17 08:38:46.546558] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:42:20.947 [2024-04-17 08:38:46.546716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83113 ] 00:42:20.947 [2024-04-17 08:38:46.683896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:20.947 [2024-04-17 08:38:46.789993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:20.947 [2024-04-17 08:38:49.499344] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:42:20.947 [2024-04-17 08:38:49.499462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:42:20.947 [2024-04-17 08:38:49.499480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:20.947 [2024-04-17 08:38:49.499494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:42:20.947 [2024-04-17 08:38:49.499504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:20.947 [2024-04-17 08:38:49.499515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:42:20.947 [2024-04-17 08:38:49.499525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:20.947 [2024-04-17 08:38:49.499536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:42:20.947 [2024-04-17 08:38:49.499546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:20.947 [2024-04-17 08:38:49.499556] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:42:20.947 [2024-04-17 08:38:49.499596] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:20.947 [2024-04-17 08:38:49.499618] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x70a010 (9): Bad file descriptor 00:42:20.947 [2024-04-17 08:38:49.504954] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:42:20.947 Running I/O for 1 seconds... 
00:42:20.947
00:42:20.947 Latency(us)
00:42:20.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:20.947 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:42:20.947 Verification LBA range: start 0x0 length 0x4000
00:42:20.947 NVMe0n1 : 1.01 11718.51 45.78 0.00 0.00 10877.19 1001.64 26901.24
00:42:20.947 ===================================================================================================================
00:42:20.947 Total : 11718.51 45.78 0.00 0.00 10877.19 1001.64 26901.24
00:42:20.947 08:38:53 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:42:20.947 08:38:53 -- host/failover.sh@95 -- # grep -q NVMe0
00:42:21.207 08:38:54 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:42:21.207 08:38:54 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:42:21.207 08:38:54 -- host/failover.sh@99 -- # grep -q NVMe0
00:42:21.466 08:38:54 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:42:21.466 08:38:54 -- host/failover.sh@101 -- # sleep 3
00:42:24.759 08:38:57 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:42:24.759 08:38:57 -- host/failover.sh@103 -- # grep -q NVMe0
00:42:24.759 08:38:57 -- host/failover.sh@108 -- # killprocess 83113
00:42:24.759 08:38:57 -- common/autotest_common.sh@926 -- # '[' -z 83113 ']'
00:42:24.759 08:38:57 -- common/autotest_common.sh@930 -- # kill -0 83113
00:42:24.759 08:38:57 -- common/autotest_common.sh@931 -- # uname
00:42:24.759 08:38:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:42:24.759 08:38:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83113
00:42:24.759 killing process with pid 83113
08:38:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:42:24.759 08:38:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:42:24.759 08:38:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83113'
00:42:24.759 08:38:58 -- common/autotest_common.sh@945 -- # kill 83113
00:42:24.759 08:38:58 -- common/autotest_common.sh@950 -- # wait 83113
00:42:25.018 08:38:58 -- host/failover.sh@110 -- # sync
00:42:25.018 08:38:58 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:42:25.278 08:38:58 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:42:25.278 08:38:58 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:42:25.278 08:38:58 -- host/failover.sh@116 -- # nvmftestfini
00:42:25.278 08:38:58 -- nvmf/common.sh@476 -- # nvmfcleanup
00:42:25.278 08:38:58 -- nvmf/common.sh@116 -- # sync
00:42:25.278 08:38:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:42:25.278 08:38:58 -- nvmf/common.sh@119 -- # set +e
00:42:25.278 08:38:58 -- nvmf/common.sh@120 -- # for i in {1..20}
00:42:25.278 08:38:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:42:25.278 rmmod nvme_tcp
00:42:25.278 rmmod nvme_fabrics
00:42:25.278 rmmod nvme_keyring
00:42:25.278 08:38:58 --
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:42:25.278 08:38:58 -- nvmf/common.sh@123 -- # set -e 00:42:25.278 08:38:58 -- nvmf/common.sh@124 -- # return 0 00:42:25.278 08:38:58 -- nvmf/common.sh@477 -- # '[' -n 82750 ']' 00:42:25.278 08:38:58 -- nvmf/common.sh@478 -- # killprocess 82750 00:42:25.278 08:38:58 -- common/autotest_common.sh@926 -- # '[' -z 82750 ']' 00:42:25.278 08:38:58 -- common/autotest_common.sh@930 -- # kill -0 82750 00:42:25.278 08:38:58 -- common/autotest_common.sh@931 -- # uname 00:42:25.278 08:38:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:25.278 08:38:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82750 00:42:25.278 killing process with pid 82750 00:42:25.278 08:38:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:42:25.278 08:38:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:42:25.278 08:38:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82750' 00:42:25.278 08:38:58 -- common/autotest_common.sh@945 -- # kill 82750 00:42:25.278 08:38:58 -- common/autotest_common.sh@950 -- # wait 82750 00:42:25.537 08:38:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:42:25.537 08:38:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:42:25.537 08:38:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:42:25.537 08:38:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:25.537 08:38:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:42:25.537 08:38:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:25.537 08:38:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:25.537 08:38:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:25.537 08:38:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:42:25.537 ************************************ 00:42:25.537 END TEST nvmf_failover 00:42:25.537 ************************************ 00:42:25.537 00:42:25.537 real 0m32.319s 00:42:25.537 user 2m5.776s 00:42:25.537 sys 0m4.059s 00:42:25.537 08:38:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:25.537 08:38:58 -- common/autotest_common.sh@10 -- # set +x 00:42:25.797 08:38:58 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:42:25.797 08:38:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:42:25.797 08:38:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:42:25.797 08:38:58 -- common/autotest_common.sh@10 -- # set +x 00:42:25.797 ************************************ 00:42:25.797 START TEST nvmf_discovery 00:42:25.797 ************************************ 00:42:25.797 08:38:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:42:25.797 * Looking for test storage... 
00:42:25.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:42:25.797 08:38:59 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:25.797 08:38:59 -- nvmf/common.sh@7 -- # uname -s 00:42:25.797 08:38:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:25.797 08:38:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:25.797 08:38:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:25.797 08:38:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:25.797 08:38:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:25.797 08:38:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:25.797 08:38:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:25.797 08:38:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:25.797 08:38:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:25.797 08:38:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:25.797 08:38:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:42:25.797 08:38:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:42:25.797 08:38:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:25.797 08:38:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:25.797 08:38:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:25.797 08:38:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:25.797 08:38:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:25.797 08:38:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:25.797 08:38:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:25.797 08:38:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.797 08:38:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.797 08:38:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.797 08:38:59 -- paths/export.sh@5 
-- # export PATH 00:42:25.798 08:38:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.798 08:38:59 -- nvmf/common.sh@46 -- # : 0 00:42:25.798 08:38:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:42:25.798 08:38:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:42:25.798 08:38:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:42:25.798 08:38:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:25.798 08:38:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:25.798 08:38:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:42:25.798 08:38:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:42:25.798 08:38:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:42:25.798 08:38:59 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:42:25.798 08:38:59 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:42:25.798 08:38:59 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:42:25.798 08:38:59 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:42:25.798 08:38:59 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:42:25.798 08:38:59 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:42:25.798 08:38:59 -- host/discovery.sh@25 -- # nvmftestinit 00:42:25.798 08:38:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:42:25.798 08:38:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:25.798 08:38:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:42:25.798 08:38:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:42:25.798 08:38:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:42:25.798 08:38:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:25.798 08:38:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:25.798 08:38:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:25.798 08:38:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:42:25.798 08:38:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:42:25.798 08:38:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:42:25.798 08:38:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:42:25.798 08:38:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:42:25.798 08:38:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:42:25.798 08:38:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:25.798 08:38:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:25.798 08:38:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:42:25.798 08:38:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:42:25.798 08:38:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:42:25.798 08:38:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:42:25.798 08:38:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:42:25.798 08:38:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:25.798 08:38:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:42:25.798 
08:38:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:42:25.798 08:38:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:42:25.798 08:38:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:42:25.798 08:38:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:42:25.798 08:38:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:42:26.057 Cannot find device "nvmf_tgt_br" 00:42:26.057 08:38:59 -- nvmf/common.sh@154 -- # true 00:42:26.057 08:38:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:42:26.057 Cannot find device "nvmf_tgt_br2" 00:42:26.057 08:38:59 -- nvmf/common.sh@155 -- # true 00:42:26.057 08:38:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:42:26.057 08:38:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:42:26.057 Cannot find device "nvmf_tgt_br" 00:42:26.057 08:38:59 -- nvmf/common.sh@157 -- # true 00:42:26.057 08:38:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:42:26.057 Cannot find device "nvmf_tgt_br2" 00:42:26.057 08:38:59 -- nvmf/common.sh@158 -- # true 00:42:26.057 08:38:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:42:26.058 08:38:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:42:26.058 08:38:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:26.058 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:26.058 08:38:59 -- nvmf/common.sh@161 -- # true 00:42:26.058 08:38:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:26.058 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:26.058 08:38:59 -- nvmf/common.sh@162 -- # true 00:42:26.058 08:38:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:42:26.058 08:38:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:42:26.058 08:38:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:42:26.058 08:38:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:42:26.058 08:38:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:42:26.058 08:38:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:42:26.058 08:38:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:42:26.058 08:38:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:42:26.058 08:38:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:42:26.058 08:38:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:42:26.058 08:38:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:42:26.058 08:38:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:42:26.058 08:38:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:42:26.058 08:38:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:42:26.058 08:38:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:42:26.058 08:38:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:42:26.058 08:38:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:42:26.058 08:38:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:42:26.058 08:38:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br 
master nvmf_br 00:42:26.058 08:38:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:42:26.058 08:38:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:42:26.317 08:38:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:42:26.317 08:38:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:26.317 08:38:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:42:26.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:26.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:42:26.317 00:42:26.317 --- 10.0.0.2 ping statistics --- 00:42:26.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:26.317 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:42:26.317 08:38:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:42:26.317 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:42:26.317 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:42:26.317 00:42:26.317 --- 10.0.0.3 ping statistics --- 00:42:26.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:26.317 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:42:26.317 08:38:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:42:26.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:26.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:42:26.317 00:42:26.317 --- 10.0.0.1 ping statistics --- 00:42:26.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:26.317 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:42:26.317 08:38:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:26.317 08:38:59 -- nvmf/common.sh@421 -- # return 0 00:42:26.317 08:38:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:42:26.317 08:38:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:26.317 08:38:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:42:26.317 08:38:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:42:26.317 08:38:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:26.317 08:38:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:42:26.317 08:38:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:42:26.317 08:38:59 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:42:26.317 08:38:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:42:26.317 08:38:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:42:26.317 08:38:59 -- common/autotest_common.sh@10 -- # set +x 00:42:26.317 08:38:59 -- nvmf/common.sh@469 -- # nvmfpid=83543 00:42:26.317 08:38:59 -- nvmf/common.sh@470 -- # waitforlisten 83543 00:42:26.317 08:38:59 -- common/autotest_common.sh@819 -- # '[' -z 83543 ']' 00:42:26.317 08:38:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:26.317 08:38:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:26.317 08:38:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:42:26.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:26.317 08:38:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
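All the plumbing traced above is nvmf_veth_init building the virtual test network that the three pings just verified: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the target interfaces (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, and both sides meet on the nvmf_br bridge, which is why nvmf_tgt itself is launched under ip netns exec. A condensed sketch of the same topology using the names from this log (the full helper also wires up nvmf_tgt_if2/nvmf_tgt_br2 for 10.0.0.3 and adds the iptables ACCEPT rules seen above):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br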
00:42:26.317 08:38:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:26.317 08:38:59 -- common/autotest_common.sh@10 -- # set +x 00:42:26.317 [2024-04-17 08:38:59.481649] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:42:26.317 [2024-04-17 08:38:59.481727] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:26.317 [2024-04-17 08:38:59.621904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:26.576 [2024-04-17 08:38:59.723968] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:42:26.576 [2024-04-17 08:38:59.724086] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:26.576 [2024-04-17 08:38:59.724094] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:26.576 [2024-04-17 08:38:59.724100] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:26.576 [2024-04-17 08:38:59.724118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:27.144 08:39:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:27.144 08:39:00 -- common/autotest_common.sh@852 -- # return 0 00:42:27.144 08:39:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:42:27.144 08:39:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:42:27.144 08:39:00 -- common/autotest_common.sh@10 -- # set +x 00:42:27.144 08:39:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:27.144 08:39:00 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:27.144 08:39:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:27.144 08:39:00 -- common/autotest_common.sh@10 -- # set +x 00:42:27.144 [2024-04-17 08:39:00.449777] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:27.144 08:39:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:27.144 08:39:00 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:42:27.144 08:39:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:27.144 08:39:00 -- common/autotest_common.sh@10 -- # set +x 00:42:27.144 [2024-04-17 08:39:00.461855] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:42:27.144 08:39:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:27.144 08:39:00 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:42:27.144 08:39:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:27.144 08:39:00 -- common/autotest_common.sh@10 -- # set +x 00:42:27.404 null0 00:42:27.404 08:39:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:27.404 08:39:00 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:42:27.404 08:39:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:27.404 08:39:00 -- common/autotest_common.sh@10 -- # set +x 00:42:27.404 null1 00:42:27.404 08:39:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:27.404 08:39:00 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:42:27.404 08:39:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:27.404 08:39:00 -- 
common/autotest_common.sh@10 -- # set +x 00:42:27.404 08:39:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:27.404 08:39:00 -- host/discovery.sh@45 -- # hostpid=83602 00:42:27.404 08:39:00 -- host/discovery.sh@46 -- # waitforlisten 83602 /tmp/host.sock 00:42:27.404 08:39:00 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:42:27.404 08:39:00 -- common/autotest_common.sh@819 -- # '[' -z 83602 ']' 00:42:27.404 08:39:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:42:27.404 08:39:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:27.404 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:42:27.404 08:39:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:42:27.404 08:39:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:27.404 08:39:00 -- common/autotest_common.sh@10 -- # set +x 00:42:27.404 [2024-04-17 08:39:00.547948] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:42:27.404 [2024-04-17 08:39:00.548016] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83602 ] 00:42:27.404 [2024-04-17 08:39:00.688103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:27.663 [2024-04-17 08:39:00.791731] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:42:27.663 [2024-04-17 08:39:00.791871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:28.232 08:39:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:28.232 08:39:01 -- common/autotest_common.sh@852 -- # return 0 00:42:28.232 08:39:01 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:28.232 08:39:01 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:42:28.232 08:39:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:28.232 08:39:01 -- common/autotest_common.sh@10 -- # set +x 00:42:28.232 08:39:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:28.232 08:39:01 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:42:28.232 08:39:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:28.232 08:39:01 -- common/autotest_common.sh@10 -- # set +x 00:42:28.232 08:39:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:28.232 08:39:01 -- host/discovery.sh@72 -- # notify_id=0 00:42:28.232 08:39:01 -- host/discovery.sh@78 -- # get_subsystem_names 00:42:28.232 08:39:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:42:28.232 08:39:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:42:28.232 08:39:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:28.232 08:39:01 -- host/discovery.sh@59 -- # xargs 00:42:28.232 08:39:01 -- common/autotest_common.sh@10 -- # set +x 00:42:28.232 08:39:01 -- host/discovery.sh@59 -- # sort 00:42:28.232 08:39:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:28.232 08:39:01 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:42:28.232 08:39:01 -- host/discovery.sh@79 -- # get_bdev_list 00:42:28.232 
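get_subsystem_names and get_bdev_list, first traced here and re-checked after every discovery event below, are thin wrappers over the host-side RPC socket: they list the attached NVMe controllers (respectively bdevs) and flatten the names into one sorted line so the test can string-compare them. A sketch of what the jq/sort/xargs traces expand to (socket path as in this log; rpc_cmd is the autotest helper around scripts/rpc.py):

    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

Both come back empty at this point; once bdev_nvme_start_discovery attaches the subsystem they return nvme0 and, after the null namespaces are added, nvme0n1 nvme0n2, which is what the [[ ... == ... ]] checks below assert.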
08:39:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:42:28.232 08:39:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:28.232 08:39:01 -- host/discovery.sh@55 -- # sort 00:42:28.232 08:39:01 -- host/discovery.sh@55 -- # xargs 00:42:28.232 08:39:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:28.232 08:39:01 -- common/autotest_common.sh@10 -- # set +x 00:42:28.232 08:39:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:28.492 08:39:01 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:42:28.492 08:39:01 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:42:28.492 08:39:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:28.492 08:39:01 -- common/autotest_common.sh@10 -- # set +x 00:42:28.492 08:39:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:28.492 08:39:01 -- host/discovery.sh@82 -- # get_subsystem_names 00:42:28.492 08:39:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:42:28.492 08:39:01 -- host/discovery.sh@59 -- # sort 00:42:28.492 08:39:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:28.492 08:39:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:42:28.492 08:39:01 -- common/autotest_common.sh@10 -- # set +x 00:42:28.492 08:39:01 -- host/discovery.sh@59 -- # xargs 00:42:28.493 08:39:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:28.493 08:39:01 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:42:28.493 08:39:01 -- host/discovery.sh@83 -- # get_bdev_list 00:42:28.493 08:39:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:28.493 08:39:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:42:28.493 08:39:01 -- host/discovery.sh@55 -- # sort 00:42:28.493 08:39:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:28.493 08:39:01 -- host/discovery.sh@55 -- # xargs 00:42:28.493 08:39:01 -- common/autotest_common.sh@10 -- # set +x 00:42:28.493 08:39:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:28.493 08:39:01 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:42:28.493 08:39:01 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:42:28.493 08:39:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:28.493 08:39:01 -- common/autotest_common.sh@10 -- # set +x 00:42:28.493 08:39:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:28.493 08:39:01 -- host/discovery.sh@86 -- # get_subsystem_names 00:42:28.493 08:39:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:42:28.493 08:39:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:42:28.493 08:39:01 -- host/discovery.sh@59 -- # xargs 00:42:28.493 08:39:01 -- host/discovery.sh@59 -- # sort 00:42:28.493 08:39:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:28.493 08:39:01 -- common/autotest_common.sh@10 -- # set +x 00:42:28.493 08:39:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:28.493 08:39:01 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:42:28.493 08:39:01 -- host/discovery.sh@87 -- # get_bdev_list 00:42:28.493 08:39:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:28.493 08:39:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:42:28.493 08:39:01 -- host/discovery.sh@55 -- # xargs 00:42:28.493 08:39:01 -- host/discovery.sh@55 -- # sort 00:42:28.493 08:39:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:28.493 08:39:01 -- common/autotest_common.sh@10 -- # set 
+x 00:42:28.493 08:39:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:28.777 08:39:01 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:42:28.777 08:39:01 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:28.777 08:39:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:28.777 08:39:01 -- common/autotest_common.sh@10 -- # set +x 00:42:28.777 [2024-04-17 08:39:01.835620] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:28.777 08:39:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:28.777 08:39:01 -- host/discovery.sh@92 -- # get_subsystem_names 00:42:28.777 08:39:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:42:28.777 08:39:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:42:28.777 08:39:01 -- host/discovery.sh@59 -- # sort 00:42:28.777 08:39:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:28.777 08:39:01 -- common/autotest_common.sh@10 -- # set +x 00:42:28.777 08:39:01 -- host/discovery.sh@59 -- # xargs 00:42:28.777 08:39:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:28.777 08:39:01 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:42:28.777 08:39:01 -- host/discovery.sh@93 -- # get_bdev_list 00:42:28.777 08:39:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:42:28.777 08:39:01 -- host/discovery.sh@55 -- # sort 00:42:28.777 08:39:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:28.777 08:39:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:28.777 08:39:01 -- common/autotest_common.sh@10 -- # set +x 00:42:28.777 08:39:01 -- host/discovery.sh@55 -- # xargs 00:42:28.777 08:39:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:28.777 08:39:01 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:42:28.777 08:39:01 -- host/discovery.sh@94 -- # get_notification_count 00:42:28.777 08:39:01 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:42:28.777 08:39:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:28.777 08:39:01 -- common/autotest_common.sh@10 -- # set +x 00:42:28.777 08:39:01 -- host/discovery.sh@74 -- # jq '. 
| length' 00:42:28.777 08:39:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:28.777 08:39:01 -- host/discovery.sh@74 -- # notification_count=0 00:42:28.777 08:39:01 -- host/discovery.sh@75 -- # notify_id=0 00:42:28.777 08:39:01 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:42:28.777 08:39:01 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:42:28.777 08:39:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:28.777 08:39:01 -- common/autotest_common.sh@10 -- # set +x 00:42:28.777 08:39:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:28.777 08:39:01 -- host/discovery.sh@100 -- # sleep 1 00:42:29.359 [2024-04-17 08:39:02.468575] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:42:29.359 [2024-04-17 08:39:02.468615] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:42:29.359 [2024-04-17 08:39:02.468629] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:42:29.359 [2024-04-17 08:39:02.554537] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:42:29.359 [2024-04-17 08:39:02.610182] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:42:29.359 [2024-04-17 08:39:02.610228] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:42:29.928 08:39:02 -- host/discovery.sh@101 -- # get_subsystem_names 00:42:29.928 08:39:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:42:29.928 08:39:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:42:29.928 08:39:02 -- host/discovery.sh@59 -- # xargs 00:42:29.928 08:39:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:29.928 08:39:02 -- common/autotest_common.sh@10 -- # set +x 00:42:29.928 08:39:02 -- host/discovery.sh@59 -- # sort 00:42:29.928 08:39:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:29.928 08:39:03 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:29.928 08:39:03 -- host/discovery.sh@102 -- # get_bdev_list 00:42:29.928 08:39:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:29.928 08:39:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:29.928 08:39:03 -- common/autotest_common.sh@10 -- # set +x 00:42:29.928 08:39:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:42:29.928 08:39:03 -- host/discovery.sh@55 -- # sort 00:42:29.928 08:39:03 -- host/discovery.sh@55 -- # xargs 00:42:29.928 08:39:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:29.928 08:39:03 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:42:29.928 08:39:03 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:42:29.928 08:39:03 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:42:29.928 08:39:03 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:42:29.928 08:39:03 -- host/discovery.sh@63 -- # sort -n 00:42:29.928 08:39:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:29.928 08:39:03 -- host/discovery.sh@63 -- # xargs 00:42:29.928 08:39:03 -- common/autotest_common.sh@10 -- # set +x 00:42:29.928 08:39:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:29.928 08:39:03 -- host/discovery.sh@103 
-- # [[ 4420 == \4\4\2\0 ]] 00:42:29.928 08:39:03 -- host/discovery.sh@104 -- # get_notification_count 00:42:29.928 08:39:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:42:29.928 08:39:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:29.928 08:39:03 -- host/discovery.sh@74 -- # jq '. | length' 00:42:29.928 08:39:03 -- common/autotest_common.sh@10 -- # set +x 00:42:29.928 08:39:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:29.928 08:39:03 -- host/discovery.sh@74 -- # notification_count=1 00:42:29.928 08:39:03 -- host/discovery.sh@75 -- # notify_id=1 00:42:29.928 08:39:03 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:42:29.928 08:39:03 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:42:29.928 08:39:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:29.928 08:39:03 -- common/autotest_common.sh@10 -- # set +x 00:42:29.928 08:39:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:29.928 08:39:03 -- host/discovery.sh@109 -- # sleep 1 00:42:31.308 08:39:04 -- host/discovery.sh@110 -- # get_bdev_list 00:42:31.308 08:39:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:31.308 08:39:04 -- host/discovery.sh@55 -- # xargs 00:42:31.308 08:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:31.308 08:39:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:42:31.308 08:39:04 -- common/autotest_common.sh@10 -- # set +x 00:42:31.308 08:39:04 -- host/discovery.sh@55 -- # sort 00:42:31.308 08:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:31.308 08:39:04 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:42:31.308 08:39:04 -- host/discovery.sh@111 -- # get_notification_count 00:42:31.308 08:39:04 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:42:31.308 08:39:04 -- host/discovery.sh@74 -- # jq '. 
| length' 00:42:31.308 08:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:31.308 08:39:04 -- common/autotest_common.sh@10 -- # set +x 00:42:31.308 08:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:31.308 08:39:04 -- host/discovery.sh@74 -- # notification_count=1 00:42:31.308 08:39:04 -- host/discovery.sh@75 -- # notify_id=2 00:42:31.308 08:39:04 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:42:31.308 08:39:04 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:42:31.308 08:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:31.308 08:39:04 -- common/autotest_common.sh@10 -- # set +x 00:42:31.308 [2024-04-17 08:39:04.320038] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:42:31.308 [2024-04-17 08:39:04.320798] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:42:31.308 [2024-04-17 08:39:04.320825] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:42:31.308 08:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:31.308 08:39:04 -- host/discovery.sh@117 -- # sleep 1 00:42:31.308 [2024-04-17 08:39:04.406701] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:42:31.308 [2024-04-17 08:39:04.467868] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:42:31.308 [2024-04-17 08:39:04.467910] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:42:31.308 [2024-04-17 08:39:04.467916] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:42:32.246 08:39:05 -- host/discovery.sh@118 -- # get_subsystem_names 00:42:32.246 08:39:05 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:42:32.246 08:39:05 -- host/discovery.sh@59 -- # sort 00:42:32.246 08:39:05 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:42:32.246 08:39:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:32.246 08:39:05 -- common/autotest_common.sh@10 -- # set +x 00:42:32.246 08:39:05 -- host/discovery.sh@59 -- # xargs 00:42:32.246 08:39:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:32.246 08:39:05 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:32.246 08:39:05 -- host/discovery.sh@119 -- # get_bdev_list 00:42:32.246 08:39:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:32.246 08:39:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:32.246 08:39:05 -- common/autotest_common.sh@10 -- # set +x 00:42:32.246 08:39:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:42:32.246 08:39:05 -- host/discovery.sh@55 -- # sort 00:42:32.246 08:39:05 -- host/discovery.sh@55 -- # xargs 00:42:32.246 08:39:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:32.246 08:39:05 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:42:32.246 08:39:05 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:42:32.246 08:39:05 -- host/discovery.sh@63 -- # sort -n 00:42:32.246 08:39:05 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:42:32.246 08:39:05 -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:42:32.246 08:39:05 -- host/discovery.sh@63 -- # xargs 00:42:32.246 08:39:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:32.246 08:39:05 -- common/autotest_common.sh@10 -- # set +x 00:42:32.246 08:39:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:32.246 08:39:05 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:42:32.246 08:39:05 -- host/discovery.sh@121 -- # get_notification_count 00:42:32.246 08:39:05 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:42:32.246 08:39:05 -- host/discovery.sh@74 -- # jq '. | length' 00:42:32.246 08:39:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:32.246 08:39:05 -- common/autotest_common.sh@10 -- # set +x 00:42:32.246 08:39:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:32.246 08:39:05 -- host/discovery.sh@74 -- # notification_count=0 00:42:32.246 08:39:05 -- host/discovery.sh@75 -- # notify_id=2 00:42:32.246 08:39:05 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:42:32.246 08:39:05 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:32.246 08:39:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:32.246 08:39:05 -- common/autotest_common.sh@10 -- # set +x 00:42:32.246 [2024-04-17 08:39:05.547012] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:42:32.246 [2024-04-17 08:39:05.547048] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:42:32.246 08:39:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:32.246 08:39:05 -- host/discovery.sh@127 -- # sleep 1 00:42:32.246 [2024-04-17 08:39:05.552378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:42:32.246 [2024-04-17 08:39:05.552414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:32.246 [2024-04-17 08:39:05.552424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:42:32.246 [2024-04-17 08:39:05.552431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:32.246 [2024-04-17 08:39:05.552438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:42:32.246 [2024-04-17 08:39:05.552444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:32.246 [2024-04-17 08:39:05.552451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:42:32.246 [2024-04-17 08:39:05.552457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:32.246 [2024-04-17 08:39:05.552463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf78bd0 is same with the state(5) to be set 00:42:32.246 [2024-04-17 08:39:05.562322] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf78bd0 (9): Bad file descriptor 00:42:32.246 [2024-04-17 08:39:05.572321] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:42:32.246 [2024-04-17 08:39:05.572433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:32.246 [2024-04-17 08:39:05.572461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:32.246 [2024-04-17 08:39:05.572471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf78bd0 with addr=10.0.0.2, port=4420 00:42:32.246 [2024-04-17 08:39:05.572478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf78bd0 is same with the state(5) to be set 00:42:32.246 [2024-04-17 08:39:05.572489] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf78bd0 (9): Bad file descriptor 00:42:32.246 [2024-04-17 08:39:05.572498] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:32.246 [2024-04-17 08:39:05.572504] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:42:32.246 [2024-04-17 08:39:05.572511] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:32.246 [2024-04-17 08:39:05.572521] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:32.506 [2024-04-17 08:39:05.582349] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:42:32.506 [2024-04-17 08:39:05.582421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:32.506 [2024-04-17 08:39:05.582448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:32.506 [2024-04-17 08:39:05.582458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf78bd0 with addr=10.0.0.2, port=4420 00:42:32.506 [2024-04-17 08:39:05.582464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf78bd0 is same with the state(5) to be set 00:42:32.506 [2024-04-17 08:39:05.582474] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf78bd0 (9): Bad file descriptor 00:42:32.506 [2024-04-17 08:39:05.582484] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:32.506 [2024-04-17 08:39:05.582490] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:42:32.506 [2024-04-17 08:39:05.582496] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:32.506 [2024-04-17 08:39:05.582505] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:32.506 [2024-04-17 08:39:05.592369] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:42:32.506 [2024-04-17 08:39:05.592442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:32.506 [2024-04-17 08:39:05.592467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:32.506 [2024-04-17 08:39:05.592475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf78bd0 with addr=10.0.0.2, port=4420 00:42:32.506 [2024-04-17 08:39:05.592481] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf78bd0 is same with the state(5) to be set 00:42:32.506 [2024-04-17 08:39:05.592490] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf78bd0 (9): Bad file descriptor 00:42:32.506 [2024-04-17 08:39:05.592499] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:32.506 [2024-04-17 08:39:05.592504] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:42:32.506 [2024-04-17 08:39:05.592509] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:32.506 [2024-04-17 08:39:05.592518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:32.506 [2024-04-17 08:39:05.602401] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:42:32.506 [2024-04-17 08:39:05.602468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:32.506 [2024-04-17 08:39:05.602493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:32.506 [2024-04-17 08:39:05.602501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf78bd0 with addr=10.0.0.2, port=4420 00:42:32.506 [2024-04-17 08:39:05.602508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf78bd0 is same with the state(5) to be set 00:42:32.506 [2024-04-17 08:39:05.602520] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf78bd0 (9): Bad file descriptor 00:42:32.506 [2024-04-17 08:39:05.602529] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:32.506 [2024-04-17 08:39:05.602534] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:42:32.506 [2024-04-17 08:39:05.602540] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:32.506 [2024-04-17 08:39:05.602550] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:42:32.506 [2024-04-17 08:39:05.612430] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:42:32.506 [2024-04-17 08:39:05.612509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:32.506 [2024-04-17 08:39:05.612537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:32.506 [2024-04-17 08:39:05.612546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf78bd0 with addr=10.0.0.2, port=4420 00:42:32.506 [2024-04-17 08:39:05.612553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf78bd0 is same with the state(5) to be set 00:42:32.506 [2024-04-17 08:39:05.612564] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf78bd0 (9): Bad file descriptor 00:42:32.506 [2024-04-17 08:39:05.612574] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:32.506 [2024-04-17 08:39:05.612580] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:42:32.506 [2024-04-17 08:39:05.612586] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:32.506 [2024-04-17 08:39:05.612596] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:32.506 [2024-04-17 08:39:05.622461] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:42:32.506 [2024-04-17 08:39:05.622566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:32.506 [2024-04-17 08:39:05.622596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:32.506 [2024-04-17 08:39:05.622605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf78bd0 with addr=10.0.0.2, port=4420 00:42:32.506 [2024-04-17 08:39:05.622612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf78bd0 is same with the state(5) to be set 00:42:32.506 [2024-04-17 08:39:05.622625] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf78bd0 (9): Bad file descriptor 00:42:32.506 [2024-04-17 08:39:05.622634] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:32.506 [2024-04-17 08:39:05.622640] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:42:32.506 [2024-04-17 08:39:05.622646] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:32.506 [2024-04-17 08:39:05.622656] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
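The repeating *ERROR* records in this stretch are the host-side bdev_nvme module retrying its admin-queue connection to 10.0.0.2:4420 roughly every 10 ms: each connect() is refused (errno 111, ECONNREFUSED) because the 4420 listener was just removed by the nvmf_subsystem_remove_listener call above, so every reconnect poll ends in "controller reinitialization failed" until the next discovery log page prunes the 4420 path and leaves only 4421. A minimal sketch of the listener flip and the path check, assuming SPDK's scripts/rpc.py is on PATH and the target answers on its default RPC socket (an assumption; this log drives the same RPCs through the rpc_cmd wrapper):

    # Remove the 4420 listener; attached hosts start seeing ECONNREFUSED
    # on every reconnect attempt, as in the records above.
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # From the host app, confirm only the 4421 path survives; this is the
    # same jq filter host/discovery.sh uses in get_subsystem_paths.
    rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs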
00:42:32.506 [2024-04-17 08:39:05.632495] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:42:32.506 [2024-04-17 08:39:05.632548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:32.506 [2024-04-17 08:39:05.632573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:32.506 [2024-04-17 08:39:05.632582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf78bd0 with addr=10.0.0.2, port=4420 00:42:32.506 [2024-04-17 08:39:05.632588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf78bd0 is same with the state(5) to be set 00:42:32.506 [2024-04-17 08:39:05.632598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf78bd0 (9): Bad file descriptor 00:42:32.506 [2024-04-17 08:39:05.632607] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:32.506 [2024-04-17 08:39:05.632613] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:42:32.506 [2024-04-17 08:39:05.632619] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:32.506 [2024-04-17 08:39:05.632628] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:42:32.506 [2024-04-17 08:39:05.632964] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:42:32.506 [2024-04-17 08:39:05.632980] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:42:33.445 08:39:06 -- host/discovery.sh@128 -- # get_subsystem_names 00:42:33.445 08:39:06 -- host/discovery.sh@59 -- # sort 00:42:33.445 08:39:06 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:42:33.445 08:39:06 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:42:33.445 08:39:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:33.445 08:39:06 -- common/autotest_common.sh@10 -- # set +x 00:42:33.445 08:39:06 -- host/discovery.sh@59 -- # xargs 00:42:33.445 08:39:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:33.445 08:39:06 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:33.445 08:39:06 -- host/discovery.sh@129 -- # get_bdev_list 00:42:33.445 08:39:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:33.445 08:39:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:33.445 08:39:06 -- common/autotest_common.sh@10 -- # set +x 00:42:33.445 08:39:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:42:33.445 08:39:06 -- host/discovery.sh@55 -- # sort 00:42:33.445 08:39:06 -- host/discovery.sh@55 -- # xargs 00:42:33.445 08:39:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:33.445 08:39:06 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:42:33.445 08:39:06 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:42:33.445 08:39:06 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:42:33.445 08:39:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:33.445 08:39:06 -- common/autotest_common.sh@10 -- # set +x 00:42:33.445 08:39:06 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:42:33.445 08:39:06 -- 
host/discovery.sh@63 -- # sort -n 00:42:33.445 08:39:06 -- host/discovery.sh@63 -- # xargs 00:42:33.445 08:39:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:33.445 08:39:06 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:42:33.445 08:39:06 -- host/discovery.sh@131 -- # get_notification_count 00:42:33.445 08:39:06 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:42:33.445 08:39:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:33.445 08:39:06 -- host/discovery.sh@74 -- # jq '. | length' 00:42:33.445 08:39:06 -- common/autotest_common.sh@10 -- # set +x 00:42:33.445 08:39:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:33.445 08:39:06 -- host/discovery.sh@74 -- # notification_count=0 00:42:33.445 08:39:06 -- host/discovery.sh@75 -- # notify_id=2 00:42:33.445 08:39:06 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:42:33.445 08:39:06 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:42:33.445 08:39:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:33.445 08:39:06 -- common/autotest_common.sh@10 -- # set +x 00:42:33.445 08:39:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:33.445 08:39:06 -- host/discovery.sh@135 -- # sleep 1 00:42:34.826 08:39:07 -- host/discovery.sh@136 -- # get_subsystem_names 00:42:34.826 08:39:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:42:34.826 08:39:07 -- host/discovery.sh@59 -- # sort 00:42:34.827 08:39:07 -- host/discovery.sh@59 -- # xargs 00:42:34.827 08:39:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:42:34.827 08:39:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:34.827 08:39:07 -- common/autotest_common.sh@10 -- # set +x 00:42:34.827 08:39:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:34.827 08:39:07 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:42:34.827 08:39:07 -- host/discovery.sh@137 -- # get_bdev_list 00:42:34.827 08:39:07 -- host/discovery.sh@55 -- # xargs 00:42:34.827 08:39:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:34.827 08:39:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:42:34.827 08:39:07 -- host/discovery.sh@55 -- # sort 00:42:34.827 08:39:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:34.827 08:39:07 -- common/autotest_common.sh@10 -- # set +x 00:42:34.827 08:39:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:34.827 08:39:07 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:42:34.827 08:39:07 -- host/discovery.sh@138 -- # get_notification_count 00:42:34.827 08:39:07 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:42:34.827 08:39:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:34.827 08:39:07 -- common/autotest_common.sh@10 -- # set +x 00:42:34.827 08:39:07 -- host/discovery.sh@74 -- # jq '. 
| length' 00:42:34.827 08:39:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:34.827 08:39:07 -- host/discovery.sh@74 -- # notification_count=2 00:42:34.827 08:39:07 -- host/discovery.sh@75 -- # notify_id=4 00:42:34.827 08:39:07 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:42:34.827 08:39:07 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:42:34.827 08:39:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:34.827 08:39:07 -- common/autotest_common.sh@10 -- # set +x 00:42:35.765 [2024-04-17 08:39:08.946852] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:42:35.765 [2024-04-17 08:39:08.946881] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:42:35.765 [2024-04-17 08:39:08.946894] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:42:35.765 [2024-04-17 08:39:09.032790] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:42:35.765 [2024-04-17 08:39:09.092086] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:42:35.765 [2024-04-17 08:39:09.092166] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:42:35.765 08:39:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:35.765 08:39:09 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:42:35.765 08:39:09 -- common/autotest_common.sh@640 -- # local es=0 00:42:35.765 08:39:09 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:42:35.765 08:39:09 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:42:36.024 08:39:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:42:36.024 08:39:09 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:42:36.024 08:39:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:42:36.024 08:39:09 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:42:36.024 08:39:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:36.024 08:39:09 -- common/autotest_common.sh@10 -- # set +x 00:42:36.024 2024/04/17 08:39:09 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:42:36.024 request: 00:42:36.024 { 00:42:36.024 "method": "bdev_nvme_start_discovery", 00:42:36.024 "params": { 00:42:36.024 "name": "nvme", 00:42:36.024 "trtype": "tcp", 00:42:36.024 "traddr": "10.0.0.2", 00:42:36.024 "hostnqn": "nqn.2021-12.io.spdk:test", 00:42:36.024 "adrfam": "ipv4", 00:42:36.024 "trsvcid": "8009", 00:42:36.024 "wait_for_attach": true 00:42:36.024 } 00:42:36.024 } 00:42:36.024 Got JSON-RPC error response 00:42:36.024 GoRPCClient: error on JSON-RPC call 00:42:36.024 08:39:09 -- 
common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:42:36.024 08:39:09 -- common/autotest_common.sh@643 -- # es=1 00:42:36.024 08:39:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:42:36.024 08:39:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:42:36.024 08:39:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:42:36.024 08:39:09 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:42:36.024 08:39:09 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:42:36.024 08:39:09 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:42:36.024 08:39:09 -- host/discovery.sh@67 -- # xargs 00:42:36.024 08:39:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:36.024 08:39:09 -- common/autotest_common.sh@10 -- # set +x 00:42:36.024 08:39:09 -- host/discovery.sh@67 -- # sort 00:42:36.024 08:39:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:36.024 08:39:09 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:42:36.024 08:39:09 -- host/discovery.sh@147 -- # get_bdev_list 00:42:36.024 08:39:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:36.024 08:39:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:36.024 08:39:09 -- common/autotest_common.sh@10 -- # set +x 00:42:36.024 08:39:09 -- host/discovery.sh@55 -- # xargs 00:42:36.024 08:39:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:42:36.024 08:39:09 -- host/discovery.sh@55 -- # sort 00:42:36.024 08:39:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:36.024 08:39:09 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:42:36.024 08:39:09 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:42:36.024 08:39:09 -- common/autotest_common.sh@640 -- # local es=0 00:42:36.024 08:39:09 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:42:36.024 08:39:09 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:42:36.024 08:39:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:42:36.024 08:39:09 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:42:36.024 08:39:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:42:36.024 08:39:09 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:42:36.024 08:39:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:36.024 08:39:09 -- common/autotest_common.sh@10 -- # set +x 00:42:36.024 2024/04/17 08:39:09 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:42:36.024 request: 00:42:36.024 { 00:42:36.024 "method": "bdev_nvme_start_discovery", 00:42:36.024 "params": { 00:42:36.024 "name": "nvme_second", 00:42:36.024 "trtype": "tcp", 00:42:36.024 "traddr": "10.0.0.2", 00:42:36.024 "hostnqn": "nqn.2021-12.io.spdk:test", 00:42:36.024 "adrfam": "ipv4", 00:42:36.024 "trsvcid": "8009", 00:42:36.024 "wait_for_attach": true 00:42:36.024 } 00:42:36.024 } 00:42:36.024 Got JSON-RPC error response 00:42:36.024 
GoRPCClient: error on JSON-RPC call 00:42:36.024 08:39:09 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:42:36.024 08:39:09 -- common/autotest_common.sh@643 -- # es=1 00:42:36.024 08:39:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:42:36.024 08:39:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:42:36.024 08:39:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:42:36.024 08:39:09 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:42:36.024 08:39:09 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:42:36.024 08:39:09 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:42:36.024 08:39:09 -- host/discovery.sh@67 -- # xargs 00:42:36.024 08:39:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:36.024 08:39:09 -- host/discovery.sh@67 -- # sort 00:42:36.024 08:39:09 -- common/autotest_common.sh@10 -- # set +x 00:42:36.024 08:39:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:36.024 08:39:09 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:42:36.024 08:39:09 -- host/discovery.sh@153 -- # get_bdev_list 00:42:36.024 08:39:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:42:36.024 08:39:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:36.024 08:39:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:36.024 08:39:09 -- host/discovery.sh@55 -- # xargs 00:42:36.024 08:39:09 -- common/autotest_common.sh@10 -- # set +x 00:42:36.024 08:39:09 -- host/discovery.sh@55 -- # sort 00:42:36.024 08:39:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:36.024 08:39:09 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:42:36.024 08:39:09 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:42:36.024 08:39:09 -- common/autotest_common.sh@640 -- # local es=0 00:42:36.024 08:39:09 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:42:36.024 08:39:09 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:42:36.024 08:39:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:42:36.024 08:39:09 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:42:36.024 08:39:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:42:36.024 08:39:09 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:42:36.024 08:39:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:36.024 08:39:09 -- common/autotest_common.sh@10 -- # set +x 00:42:37.433 [2024-04-17 08:39:10.347551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:37.433 [2024-04-17 08:39:10.347633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:37.433 [2024-04-17 08:39:10.347645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd0410 with addr=10.0.0.2, port=8010 00:42:37.433 [2024-04-17 08:39:10.347663] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:42:37.433 [2024-04-17 08:39:10.347669] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:37.433 [2024-04-17 08:39:10.347676] bdev_nvme.c:6815:discovery_poller: 
*ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:42:38.372 [2024-04-17 08:39:11.345614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:38.372 [2024-04-17 08:39:11.345715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:42:38.372 [2024-04-17 08:39:11.345727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd0410 with addr=10.0.0.2, port=8010 00:42:38.372 [2024-04-17 08:39:11.345744] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:42:38.372 [2024-04-17 08:39:11.345750] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:38.372 [2024-04-17 08:39:11.345756] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:42:39.309 [2024-04-17 08:39:12.343558] bdev_nvme.c:6796:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:42:39.309 2024/04/17 08:39:12 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:42:39.309 request: 00:42:39.309 { 00:42:39.309 "method": "bdev_nvme_start_discovery", 00:42:39.309 "params": { 00:42:39.309 "name": "nvme_second", 00:42:39.309 "trtype": "tcp", 00:42:39.309 "traddr": "10.0.0.2", 00:42:39.309 "hostnqn": "nqn.2021-12.io.spdk:test", 00:42:39.309 "adrfam": "ipv4", 00:42:39.309 "trsvcid": "8010", 00:42:39.309 "attach_timeout_ms": 3000 00:42:39.309 } 00:42:39.309 } 00:42:39.309 Got JSON-RPC error response 00:42:39.309 GoRPCClient: error on JSON-RPC call 00:42:39.309 08:39:12 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:42:39.309 08:39:12 -- common/autotest_common.sh@643 -- # es=1 00:42:39.309 08:39:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:42:39.309 08:39:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:42:39.309 08:39:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:42:39.309 08:39:12 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:42:39.309 08:39:12 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:42:39.309 08:39:12 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:42:39.309 08:39:12 -- host/discovery.sh@67 -- # xargs 00:42:39.309 08:39:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:39.309 08:39:12 -- host/discovery.sh@67 -- # sort 00:42:39.309 08:39:12 -- common/autotest_common.sh@10 -- # set +x 00:42:39.309 08:39:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:39.309 08:39:12 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:42:39.309 08:39:12 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:42:39.309 08:39:12 -- host/discovery.sh@162 -- # kill 83602 00:42:39.309 08:39:12 -- host/discovery.sh@163 -- # nvmftestfini 00:42:39.309 08:39:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:42:39.309 08:39:12 -- nvmf/common.sh@116 -- # sync 00:42:39.309 08:39:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:42:39.309 08:39:12 -- nvmf/common.sh@119 -- # set +e 00:42:39.309 08:39:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:42:39.309 08:39:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:42:39.309 rmmod nvme_tcp 00:42:39.309 rmmod nvme_fabrics 00:42:39.309 rmmod nvme_keyring 00:42:39.309 08:39:12 -- nvmf/common.sh@122 -- # modprobe -v 
-r nvme-fabrics 00:42:39.309 08:39:12 -- nvmf/common.sh@123 -- # set -e 00:42:39.309 08:39:12 -- nvmf/common.sh@124 -- # return 0 00:42:39.309 08:39:12 -- nvmf/common.sh@477 -- # '[' -n 83543 ']' 00:42:39.309 08:39:12 -- nvmf/common.sh@478 -- # killprocess 83543 00:42:39.309 08:39:12 -- common/autotest_common.sh@926 -- # '[' -z 83543 ']' 00:42:39.309 08:39:12 -- common/autotest_common.sh@930 -- # kill -0 83543 00:42:39.309 08:39:12 -- common/autotest_common.sh@931 -- # uname 00:42:39.309 08:39:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:39.309 08:39:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83543 00:42:39.309 killing process with pid 83543 00:42:39.309 08:39:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:42:39.309 08:39:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:42:39.309 08:39:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83543' 00:42:39.309 08:39:12 -- common/autotest_common.sh@945 -- # kill 83543 00:42:39.309 08:39:12 -- common/autotest_common.sh@950 -- # wait 83543 00:42:39.567 08:39:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:42:39.567 08:39:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:42:39.567 08:39:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:42:39.567 08:39:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:39.567 08:39:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:42:39.567 08:39:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:39.567 08:39:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:39.567 08:39:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:39.567 08:39:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:42:39.567 00:42:39.567 real 0m13.868s 00:42:39.567 user 0m27.016s 00:42:39.567 sys 0m1.711s 00:42:39.567 08:39:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:39.567 08:39:12 -- common/autotest_common.sh@10 -- # set +x 00:42:39.567 ************************************ 00:42:39.567 END TEST nvmf_discovery 00:42:39.567 ************************************ 00:42:39.567 08:39:12 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:42:39.567 08:39:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:42:39.567 08:39:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:42:39.567 08:39:12 -- common/autotest_common.sh@10 -- # set +x 00:42:39.567 ************************************ 00:42:39.567 START TEST nvmf_discovery_remove_ifc 00:42:39.567 ************************************ 00:42:39.568 08:39:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:42:39.828 * Looking for test storage... 
00:42:39.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:42:39.828 08:39:12 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:39.828 08:39:12 -- nvmf/common.sh@7 -- # uname -s 00:42:39.828 08:39:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:39.828 08:39:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:39.828 08:39:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:39.828 08:39:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:39.828 08:39:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:39.828 08:39:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:39.828 08:39:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:39.828 08:39:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:39.828 08:39:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:39.828 08:39:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:39.828 08:39:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:42:39.828 08:39:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:42:39.828 08:39:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:39.828 08:39:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:39.828 08:39:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:39.828 08:39:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:39.828 08:39:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:39.828 08:39:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:39.828 08:39:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:39.828 08:39:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:39.828 08:39:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:39.828 08:39:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:39.828 08:39:13 -- 
paths/export.sh@5 -- # export PATH 00:42:39.828 08:39:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:39.828 08:39:13 -- nvmf/common.sh@46 -- # : 0 00:42:39.828 08:39:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:42:39.828 08:39:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:42:39.828 08:39:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:42:39.828 08:39:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:39.828 08:39:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:39.828 08:39:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:42:39.828 08:39:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:42:39.828 08:39:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:42:39.828 08:39:13 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:42:39.828 08:39:13 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:42:39.828 08:39:13 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:42:39.828 08:39:13 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:42:39.828 08:39:13 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:42:39.828 08:39:13 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:42:39.828 08:39:13 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:42:39.828 08:39:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:42:39.828 08:39:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:39.828 08:39:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:42:39.828 08:39:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:42:39.828 08:39:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:42:39.828 08:39:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:39.828 08:39:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:39.828 08:39:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:39.828 08:39:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:42:39.828 08:39:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:42:39.828 08:39:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:42:39.828 08:39:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:42:39.828 08:39:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:42:39.828 08:39:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:42:39.828 08:39:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:39.828 08:39:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:39.828 08:39:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:42:39.828 08:39:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:42:39.828 08:39:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:42:39.828 08:39:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:42:39.828 08:39:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:42:39.829 08:39:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
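Before the host can reach the target below, nvmf_veth_init (from the nvmf/common.sh just sourced) builds a veth-and-bridge topology: the target side moves into the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, the initiator side keeps 10.0.0.1 in the root namespace, and everything hangs off the nvmf_br bridge. A condensed sketch of that setup, assuming root privileges; the full, traced sequence follows in the log:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # (nvmf_tgt_if2 / nvmf_tgt_br2 with 10.0.0.3 are created the same way;
    # omitted here for brevity.)
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT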
00:42:39.829 08:39:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:42:39.829 08:39:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:42:39.829 08:39:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:42:39.829 08:39:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:42:39.829 08:39:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:42:39.829 08:39:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:42:39.829 Cannot find device "nvmf_tgt_br" 00:42:39.829 08:39:13 -- nvmf/common.sh@154 -- # true 00:42:39.829 08:39:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:42:39.829 Cannot find device "nvmf_tgt_br2" 00:42:39.829 08:39:13 -- nvmf/common.sh@155 -- # true 00:42:39.829 08:39:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:42:39.829 08:39:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:42:39.829 Cannot find device "nvmf_tgt_br" 00:42:39.829 08:39:13 -- nvmf/common.sh@157 -- # true 00:42:39.829 08:39:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:42:39.829 Cannot find device "nvmf_tgt_br2" 00:42:39.829 08:39:13 -- nvmf/common.sh@158 -- # true 00:42:39.829 08:39:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:42:39.829 08:39:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:42:39.829 08:39:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:39.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:39.829 08:39:13 -- nvmf/common.sh@161 -- # true 00:42:39.829 08:39:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:39.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:39.829 08:39:13 -- nvmf/common.sh@162 -- # true 00:42:39.829 08:39:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:42:39.829 08:39:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:42:40.088 08:39:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:42:40.088 08:39:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:42:40.088 08:39:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:42:40.088 08:39:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:42:40.088 08:39:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:42:40.088 08:39:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:42:40.088 08:39:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:42:40.088 08:39:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:42:40.088 08:39:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:42:40.088 08:39:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:42:40.088 08:39:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:42:40.088 08:39:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:42:40.088 08:39:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:42:40.088 08:39:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:42:40.088 08:39:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:42:40.088 08:39:13 -- nvmf/common.sh@192 -- # ip 
link set nvmf_br up 00:42:40.088 08:39:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:42:40.088 08:39:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:42:40.088 08:39:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:42:40.088 08:39:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:42:40.088 08:39:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:40.088 08:39:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:42:40.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:40.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:42:40.088 00:42:40.088 --- 10.0.0.2 ping statistics --- 00:42:40.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:40.088 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:42:40.088 08:39:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:42:40.089 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:42:40.089 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:42:40.089 00:42:40.089 --- 10.0.0.3 ping statistics --- 00:42:40.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:40.089 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:42:40.089 08:39:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:42:40.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:40.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:42:40.089 00:42:40.089 --- 10.0.0.1 ping statistics --- 00:42:40.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:40.089 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:42:40.089 08:39:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:40.089 08:39:13 -- nvmf/common.sh@421 -- # return 0 00:42:40.089 08:39:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:42:40.089 08:39:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:40.089 08:39:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:42:40.089 08:39:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:42:40.089 08:39:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:40.089 08:39:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:42:40.089 08:39:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:42:40.089 08:39:13 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:42:40.089 08:39:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:42:40.089 08:39:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:42:40.089 08:39:13 -- common/autotest_common.sh@10 -- # set +x 00:42:40.089 08:39:13 -- nvmf/common.sh@469 -- # nvmfpid=84101 00:42:40.089 08:39:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:42:40.089 08:39:13 -- nvmf/common.sh@470 -- # waitforlisten 84101 00:42:40.089 08:39:13 -- common/autotest_common.sh@819 -- # '[' -z 84101 ']' 00:42:40.089 08:39:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:40.089 08:39:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:40.089 08:39:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:40.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
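The pings above verify the bridged path in both directions, and the test then runs two independent SPDK apps: nvmf_tgt inside the namespace as the target (pid 84101 here, answering RPCs on the default /var/tmp/spdk.sock), and, just below, a second nvmf_tgt in the root namespace acting as the host/initiator on its own socket, /tmp/host.sock, so each end can be driven over a separate RPC channel. The pair of launch commands as used in this run:

    # Target: inside the namespace, core mask 0x2, all tracepoint groups
    # enabled (-e 0xFFFF).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # Host side: same binary, core mask 0x1, its own RPC socket; held at
    # --wait-for-rpc so bdev_nvme_set_options can run before
    # framework_start_init, with bdev_nvme debug logging on (-L bdev_nvme).
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &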
00:42:40.089 08:39:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:40.089 08:39:13 -- common/autotest_common.sh@10 -- # set +x 00:42:40.089 [2024-04-17 08:39:13.387117] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:42:40.089 [2024-04-17 08:39:13.387197] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:40.347 [2024-04-17 08:39:13.525627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:40.347 [2024-04-17 08:39:13.629173] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:42:40.347 [2024-04-17 08:39:13.629311] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:40.347 [2024-04-17 08:39:13.629317] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:40.347 [2024-04-17 08:39:13.629322] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:40.347 [2024-04-17 08:39:13.629344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:41.300 08:39:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:41.300 08:39:14 -- common/autotest_common.sh@852 -- # return 0 00:42:41.300 08:39:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:42:41.300 08:39:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:42:41.300 08:39:14 -- common/autotest_common.sh@10 -- # set +x 00:42:41.300 08:39:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:41.300 08:39:14 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:42:41.300 08:39:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:41.300 08:39:14 -- common/autotest_common.sh@10 -- # set +x 00:42:41.300 [2024-04-17 08:39:14.344122] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:41.300 [2024-04-17 08:39:14.352220] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:42:41.300 null0 00:42:41.300 [2024-04-17 08:39:14.384104] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:41.300 08:39:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:41.300 08:39:14 -- host/discovery_remove_ifc.sh@59 -- # hostpid=84151 00:42:41.300 08:39:14 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:42:41.300 08:39:14 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 84151 /tmp/host.sock 00:42:41.300 08:39:14 -- common/autotest_common.sh@819 -- # '[' -z 84151 ']' 00:42:41.300 08:39:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:42:41.300 08:39:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:41.300 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:42:41.300 08:39:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:42:41.300 08:39:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:41.300 08:39:14 -- common/autotest_common.sh@10 -- # set +x 00:42:41.300 [2024-04-17 08:39:14.463710] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:42:41.300 [2024-04-17 08:39:14.463784] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84151 ] 00:42:41.300 [2024-04-17 08:39:14.601967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:41.559 [2024-04-17 08:39:14.708689] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:42:41.559 [2024-04-17 08:39:14.708834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:42.126 08:39:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:42.126 08:39:15 -- common/autotest_common.sh@852 -- # return 0 00:42:42.126 08:39:15 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:42.126 08:39:15 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:42:42.126 08:39:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:42.126 08:39:15 -- common/autotest_common.sh@10 -- # set +x 00:42:42.126 08:39:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:42.126 08:39:15 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:42:42.126 08:39:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:42.126 08:39:15 -- common/autotest_common.sh@10 -- # set +x 00:42:42.385 08:39:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:42.385 08:39:15 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:42:42.385 08:39:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:42.385 08:39:15 -- common/autotest_common.sh@10 -- # set +x 00:42:43.320 [2024-04-17 08:39:16.502761] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:42:43.320 [2024-04-17 08:39:16.502805] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:42:43.320 [2024-04-17 08:39:16.502820] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:42:43.320 [2024-04-17 08:39:16.588728] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:42:43.320 [2024-04-17 08:39:16.644465] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:42:43.320 [2024-04-17 08:39:16.644536] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:42:43.320 [2024-04-17 08:39:16.644557] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:42:43.320 [2024-04-17 08:39:16.644575] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:42:43.320 [2024-04-17 08:39:16.644602] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:42:43.320 08:39:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:43.320 08:39:16 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:42:43.320 08:39:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:42:43.320 [2024-04-17 
08:39:16.651316] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x17923e0 was disconnected and freed. delete nvme_qpair. 00:42:43.578 08:39:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:43.578 08:39:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:42:43.578 08:39:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:43.578 08:39:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:42:43.578 08:39:16 -- common/autotest_common.sh@10 -- # set +x 00:42:43.578 08:39:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:42:43.578 08:39:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:43.578 08:39:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:42:43.578 08:39:16 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:42:43.578 08:39:16 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:42:43.578 08:39:16 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:42:43.578 08:39:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:42:43.578 08:39:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:43.578 08:39:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:43.578 08:39:16 -- common/autotest_common.sh@10 -- # set +x 00:42:43.578 08:39:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:42:43.578 08:39:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:42:43.578 08:39:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:42:43.578 08:39:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:43.578 08:39:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:42:43.578 08:39:16 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:42:44.522 08:39:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:42:44.522 08:39:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:44.522 08:39:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:44.522 08:39:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:42:44.522 08:39:17 -- common/autotest_common.sh@10 -- # set +x 00:42:44.522 08:39:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:42:44.522 08:39:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:42:44.522 08:39:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:44.522 08:39:17 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:42:44.522 08:39:17 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:42:45.899 08:39:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:42:45.899 08:39:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:42:45.899 08:39:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:45.899 08:39:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:45.899 08:39:18 -- common/autotest_common.sh@10 -- # set +x 00:42:45.899 08:39:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:42:45.899 08:39:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:42:45.899 08:39:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:45.899 08:39:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:42:45.899 08:39:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:42:46.833 08:39:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:42:46.833 08:39:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:42:46.833 08:39:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:46.833 08:39:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:42:46.833 08:39:19 -- common/autotest_common.sh@10 -- # set +x 00:42:46.833 08:39:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:42:46.833 08:39:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:42:46.833 08:39:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:46.833 08:39:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:42:46.833 08:39:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:42:47.808 08:39:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:42:47.808 08:39:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:47.808 08:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:47.808 08:39:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:42:47.808 08:39:20 -- common/autotest_common.sh@10 -- # set +x 00:42:47.808 08:39:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:42:47.808 08:39:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:42:47.808 08:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:47.808 08:39:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:42:47.808 08:39:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:42:48.747 08:39:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:42:48.747 08:39:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:42:48.747 08:39:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:48.747 08:39:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:48.747 08:39:22 -- common/autotest_common.sh@10 -- # set +x 00:42:48.747 08:39:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:42:48.747 08:39:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:42:48.747 08:39:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:48.747 [2024-04-17 08:39:22.061949] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:42:48.747 [2024-04-17 08:39:22.062003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:42:48.747 [2024-04-17 08:39:22.062014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:48.747 [2024-04-17 08:39:22.062023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:42:48.747 [2024-04-17 08:39:22.062029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:48.747 [2024-04-17 08:39:22.062036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:42:48.747 [2024-04-17 08:39:22.062041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:48.747 [2024-04-17 08:39:22.062048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:42:48.747 [2024-04-17 08:39:22.062054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:48.747 [2024-04-17 
08:39:22.062060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:42:48.747 [2024-04-17 08:39:22.062066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:48.747 [2024-04-17 08:39:22.062072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175bc40 is same with the state(5) to be set 00:42:48.747 [2024-04-17 08:39:22.071918] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175bc40 (9): Bad file descriptor 00:42:49.007 08:39:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:42:49.007 08:39:22 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:42:49.007 [2024-04-17 08:39:22.081919] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:42:49.945 [2024-04-17 08:39:23.088512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:42:49.945 08:39:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:42:49.945 08:39:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:49.945 08:39:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:49.945 08:39:23 -- common/autotest_common.sh@10 -- # set +x 00:42:49.945 08:39:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:42:49.945 08:39:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:42:49.945 08:39:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:42:50.885 [2024-04-17 08:39:24.112503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:42:50.885 [2024-04-17 08:39:24.112638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175bc40 with addr=10.0.0.2, port=4420 00:42:50.885 [2024-04-17 08:39:24.112675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175bc40 is same with the state(5) to be set 00:42:50.885 [2024-04-17 08:39:24.113882] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175bc40 (9): Bad file descriptor 00:42:50.885 [2024-04-17 08:39:24.114004] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
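The connect() errno 110 loop above is the intended effect of this test: the target-side interface was torn down a few entries earlier, so the host controller's reset attempts cannot reconnect. The removal itself scrolled by before this excerpt; a minimal sketch of its shape, inferred from the restore commands at discovery_remove_ifc.sh@82-83 below and the helper names in the xtrace — not the verbatim script:

    # Inferred shape of the interface-removal step (the del/down lines are an
    # assumption; only the add/up restore is visible in this log):
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    wait_for_bdev ''                                   # bdev list drains once the path dies
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # sh@82
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up                # sh@83
    wait_for_bdev nvme1n1                              # discovery re-attaches it (sh@86)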
00:42:50.885 [2024-04-17 08:39:24.114095] bdev_nvme.c:6504:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:42:50.885 [2024-04-17 08:39:24.114207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:42:50.885 [2024-04-17 08:39:24.114253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:50.885 [2024-04-17 08:39:24.114285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:42:50.885 [2024-04-17 08:39:24.114332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:50.885 [2024-04-17 08:39:24.114374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:42:50.885 [2024-04-17 08:39:24.114436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:50.885 [2024-04-17 08:39:24.114463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:42:50.885 [2024-04-17 08:39:24.114484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:50.885 [2024-04-17 08:39:24.114509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:42:50.885 [2024-04-17 08:39:24.114531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:50.885 [2024-04-17 08:39:24.114554] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
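The remove_discovery_entry and later discovery_attach_cb messages come from SPDK's persistent discovery poller on the host-side instance behind /tmp/host.sock. A sketch of how such a poller is typically started, assuming the standard bdev_nvme_start_discovery RPC; the part of the script where this actually happened is outside this excerpt:

    # Assumed setup (before this excerpt): attach a discovery service so that
    # subsystems appearing at 10.0.0.2:8009 are auto-created as nvme* bdevs.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 --wait-for-attach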
00:42:50.885 [2024-04-17 08:39:24.114589] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17031c0 (9): Bad file descriptor 00:42:50.885 [2024-04-17 08:39:24.115037] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:42:50.885 [2024-04-17 08:39:24.115097] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:42:50.885 08:39:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:50.885 08:39:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:42:50.885 08:39:24 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:42:51.822 08:39:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:42:51.822 08:39:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:42:51.822 08:39:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:51.822 08:39:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:51.822 08:39:25 -- common/autotest_common.sh@10 -- # set +x 00:42:51.822 08:39:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:42:51.822 08:39:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:42:51.822 08:39:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:52.082 08:39:25 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:42:52.082 08:39:25 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:42:52.082 08:39:25 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:42:52.082 08:39:25 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:42:52.082 08:39:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:42:52.082 08:39:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:52.082 08:39:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:42:52.082 08:39:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:52.082 08:39:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:42:52.082 08:39:25 -- common/autotest_common.sh@10 -- # set +x 00:42:52.082 08:39:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:42:52.082 08:39:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:52.082 08:39:25 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:42:52.082 08:39:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:42:53.021 [2024-04-17 08:39:26.115505] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:42:53.021 [2024-04-17 08:39:26.115545] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:42:53.021 [2024-04-17 08:39:26.115560] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:42:53.021 [2024-04-17 08:39:26.203488] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:42:53.021 [2024-04-17 08:39:26.265430] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:42:53.021 [2024-04-17 08:39:26.265495] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:42:53.021 [2024-04-17 08:39:26.265515] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:42:53.021 [2024-04-17 08:39:26.265530] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:42:53.021 [2024-04-17 08:39:26.265538] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:42:53.021 [2024-04-17 08:39:26.274166] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x174c3d0 was disconnected and freed. delete nvme_qpair. 00:42:53.021 08:39:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:42:53.021 08:39:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:53.021 08:39:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:53.021 08:39:26 -- common/autotest_common.sh@10 -- # set +x 00:42:53.021 08:39:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:42:53.021 08:39:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:42:53.021 08:39:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:42:53.021 08:39:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:53.021 08:39:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:42:53.021 08:39:26 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:42:53.021 08:39:26 -- host/discovery_remove_ifc.sh@90 -- # killprocess 84151 00:42:53.021 08:39:26 -- common/autotest_common.sh@926 -- # '[' -z 84151 ']' 00:42:53.021 08:39:26 -- common/autotest_common.sh@930 -- # kill -0 84151 00:42:53.021 08:39:26 -- common/autotest_common.sh@931 -- # uname 00:42:53.021 08:39:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:53.021 08:39:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84151 00:42:53.281 killing process with pid 84151 00:42:53.281 08:39:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:42:53.281 08:39:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:42:53.281 08:39:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84151' 00:42:53.281 08:39:26 -- common/autotest_common.sh@945 -- # kill 84151 00:42:53.281 08:39:26 -- common/autotest_common.sh@950 -- # wait 84151 00:42:53.281 08:39:26 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:42:53.281 08:39:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:42:53.281 08:39:26 -- nvmf/common.sh@116 -- # sync 00:42:53.281 08:39:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:42:53.281 08:39:26 -- nvmf/common.sh@119 -- # set +e 00:42:53.281 08:39:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:42:53.281 08:39:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:42:53.281 rmmod nvme_tcp 00:42:53.540 rmmod nvme_fabrics 00:42:53.541 rmmod nvme_keyring 00:42:53.541 08:39:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:42:53.541 08:39:26 -- nvmf/common.sh@123 -- # set -e 00:42:53.541 08:39:26 -- nvmf/common.sh@124 -- # return 0 00:42:53.541 08:39:26 -- nvmf/common.sh@477 -- # '[' -n 84101 ']' 00:42:53.541 08:39:26 -- nvmf/common.sh@478 -- # killprocess 84101 00:42:53.541 08:39:26 -- common/autotest_common.sh@926 -- # '[' -z 84101 ']' 00:42:53.541 08:39:26 -- common/autotest_common.sh@930 -- # kill -0 84101 00:42:53.541 08:39:26 -- common/autotest_common.sh@931 -- # uname 00:42:53.541 08:39:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:53.541 08:39:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84101 00:42:53.541 killing process with pid 84101 00:42:53.541 08:39:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:42:53.541 08:39:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
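The jq/sort/xargs pipelines that repeat throughout this test are two small helpers in discovery_remove_ifc.sh, reconstructed here from the xtrace at sh@29/@33/@34 (rpc_cmd is the autotest wrapper around scripts/rpc.py):

    get_bdev_list() {
        # Dump bdev names over the host-side RPC socket as one sorted line,
        # e.g. "nvme0n1" before the interface drop, "" after, "nvme1n1" at the end.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list equals the expected value.
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }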
00:42:53.541 08:39:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84101' 00:42:53.541 08:39:26 -- common/autotest_common.sh@945 -- # kill 84101 00:42:53.541 08:39:26 -- common/autotest_common.sh@950 -- # wait 84101 00:42:53.800 08:39:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:42:53.800 08:39:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:42:53.800 08:39:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:42:53.800 08:39:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:53.800 08:39:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:42:53.800 08:39:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:53.800 08:39:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:53.800 08:39:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:53.800 08:39:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:42:53.800 00:42:53.800 real 0m14.120s 00:42:53.800 user 0m24.355s 00:42:53.800 sys 0m1.504s 00:42:53.800 08:39:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:53.800 08:39:26 -- common/autotest_common.sh@10 -- # set +x 00:42:53.800 ************************************ 00:42:53.800 END TEST nvmf_discovery_remove_ifc 00:42:53.800 ************************************ 00:42:53.800 08:39:27 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:42:53.800 08:39:27 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:42:53.800 08:39:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:42:53.800 08:39:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:42:53.800 08:39:27 -- common/autotest_common.sh@10 -- # set +x 00:42:53.800 ************************************ 00:42:53.800 START TEST nvmf_digest 00:42:53.800 ************************************ 00:42:53.800 08:39:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:42:54.060 * Looking for test storage... 
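killprocess, whose xtrace brackets both PID 84151 and PID 84101 above, is autotest_common.sh's teardown helper: verify the PID is alive, peek at its comm name (sudo-owned processes take a different path), then kill and reap it. Roughly, per the xtrace line tags @926-@950:

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1                             # @926
        kill -0 "$pid" || return 1                            # @930: must still be running
        if [[ $(uname) == Linux ]]; then                      # @931
            process_name=$(ps --no-headers -o comm= "$pid")   # @932
        fi
        # @936: a sudo wrapper would need its child pid resolved first (branch elided)
        echo "killing process with pid $pid"                  # @944
        kill "$pid"                                           # @945
        wait "$pid"                                           # @950: reap, propagate exit code
    }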
00:42:54.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:42:54.060 08:39:27 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:54.060 08:39:27 -- nvmf/common.sh@7 -- # uname -s 00:42:54.060 08:39:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:54.060 08:39:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:54.060 08:39:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:54.060 08:39:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:54.060 08:39:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:54.060 08:39:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:54.060 08:39:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:54.060 08:39:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:54.060 08:39:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:54.060 08:39:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:54.060 08:39:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:42:54.060 08:39:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:42:54.060 08:39:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:54.060 08:39:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:54.060 08:39:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:54.060 08:39:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:54.060 08:39:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:54.060 08:39:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:54.060 08:39:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:54.060 08:39:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.060 08:39:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.061 08:39:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.061 08:39:27 -- paths/export.sh@5 
-- # export PATH 00:42:54.061 08:39:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.061 08:39:27 -- nvmf/common.sh@46 -- # : 0 00:42:54.061 08:39:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:42:54.061 08:39:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:42:54.061 08:39:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:42:54.061 08:39:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:54.061 08:39:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:54.061 08:39:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:42:54.061 08:39:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:42:54.061 08:39:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:42:54.061 08:39:27 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:42:54.061 08:39:27 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:42:54.061 08:39:27 -- host/digest.sh@16 -- # runtime=2 00:42:54.061 08:39:27 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:42:54.061 08:39:27 -- host/digest.sh@132 -- # nvmftestinit 00:42:54.061 08:39:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:42:54.061 08:39:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:54.061 08:39:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:42:54.061 08:39:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:42:54.061 08:39:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:42:54.061 08:39:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:54.061 08:39:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:54.061 08:39:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:54.061 08:39:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:42:54.061 08:39:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:42:54.061 08:39:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:42:54.061 08:39:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:42:54.061 08:39:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:42:54.061 08:39:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:42:54.061 08:39:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:54.061 08:39:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:54.061 08:39:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:42:54.061 08:39:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:42:54.061 08:39:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:42:54.061 08:39:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:42:54.061 08:39:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:42:54.061 08:39:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:54.061 08:39:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:42:54.061 08:39:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:42:54.061 08:39:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:42:54.061 08:39:27 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:42:54.061 08:39:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:42:54.061 08:39:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:42:54.061 Cannot find device "nvmf_tgt_br" 00:42:54.061 08:39:27 -- nvmf/common.sh@154 -- # true 00:42:54.061 08:39:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:42:54.061 Cannot find device "nvmf_tgt_br2" 00:42:54.061 08:39:27 -- nvmf/common.sh@155 -- # true 00:42:54.061 08:39:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:42:54.061 08:39:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:42:54.061 Cannot find device "nvmf_tgt_br" 00:42:54.061 08:39:27 -- nvmf/common.sh@157 -- # true 00:42:54.061 08:39:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:42:54.061 Cannot find device "nvmf_tgt_br2" 00:42:54.061 08:39:27 -- nvmf/common.sh@158 -- # true 00:42:54.061 08:39:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:42:54.061 08:39:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:42:54.061 08:39:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:54.061 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:54.061 08:39:27 -- nvmf/common.sh@161 -- # true 00:42:54.061 08:39:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:54.061 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:54.061 08:39:27 -- nvmf/common.sh@162 -- # true 00:42:54.061 08:39:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:42:54.061 08:39:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:42:54.061 08:39:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:42:54.320 08:39:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:42:54.320 08:39:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:42:54.320 08:39:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:42:54.320 08:39:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:42:54.320 08:39:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:42:54.320 08:39:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:42:54.320 08:39:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:42:54.320 08:39:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:42:54.320 08:39:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:42:54.320 08:39:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:42:54.320 08:39:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:42:54.320 08:39:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:42:54.320 08:39:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:42:54.320 08:39:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:42:54.320 08:39:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:42:54.320 08:39:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:42:54.320 08:39:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:42:54.320 08:39:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:42:54.320 
08:39:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:42:54.320 08:39:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:54.320 08:39:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:42:54.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:54.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:42:54.320 00:42:54.320 --- 10.0.0.2 ping statistics --- 00:42:54.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:54.320 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:42:54.320 08:39:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:42:54.320 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:42:54.320 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:42:54.320 00:42:54.320 --- 10.0.0.3 ping statistics --- 00:42:54.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:54.320 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:42:54.320 08:39:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:42:54.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:54.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:42:54.320 00:42:54.320 --- 10.0.0.1 ping statistics --- 00:42:54.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:54.320 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:42:54.320 08:39:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:54.320 08:39:27 -- nvmf/common.sh@421 -- # return 0 00:42:54.320 08:39:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:42:54.320 08:39:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:54.320 08:39:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:42:54.320 08:39:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:42:54.320 08:39:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:54.320 08:39:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:42:54.320 08:39:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:42:54.320 08:39:27 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:42:54.320 08:39:27 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:42:54.320 08:39:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:42:54.320 08:39:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:42:54.320 08:39:27 -- common/autotest_common.sh@10 -- # set +x 00:42:54.320 ************************************ 00:42:54.320 START TEST nvmf_digest_clean 00:42:54.320 ************************************ 00:42:54.320 08:39:27 -- common/autotest_common.sh@1104 -- # run_digest 00:42:54.320 08:39:27 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:42:54.320 08:39:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:42:54.320 08:39:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:42:54.320 08:39:27 -- common/autotest_common.sh@10 -- # set +x 00:42:54.320 08:39:27 -- nvmf/common.sh@469 -- # nvmfpid=84564 00:42:54.320 08:39:27 -- nvmf/common.sh@470 -- # waitforlisten 84564 00:42:54.320 08:39:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:42:54.320 08:39:27 -- common/autotest_common.sh@819 -- # '[' -z 84564 ']' 00:42:54.320 08:39:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:54.320 08:39:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:54.320 08:39:27 -- 
common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:54.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:54.320 08:39:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:54.320 08:39:27 -- common/autotest_common.sh@10 -- # set +x 00:42:54.320 [2024-04-17 08:39:27.588774] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:42:54.320 [2024-04-17 08:39:27.588846] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:54.578 [2024-04-17 08:39:27.727871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:54.579 [2024-04-17 08:39:27.828359] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:42:54.579 [2024-04-17 08:39:27.828520] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:54.579 [2024-04-17 08:39:27.828528] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:54.579 [2024-04-17 08:39:27.828534] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:54.579 [2024-04-17 08:39:27.828555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:55.148 08:39:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:55.148 08:39:28 -- common/autotest_common.sh@852 -- # return 0 00:42:55.148 08:39:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:42:55.148 08:39:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:42:55.148 08:39:28 -- common/autotest_common.sh@10 -- # set +x 00:42:55.408 08:39:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:55.408 08:39:28 -- host/digest.sh@120 -- # common_target_config 00:42:55.408 08:39:28 -- host/digest.sh@43 -- # rpc_cmd 00:42:55.408 08:39:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:42:55.408 08:39:28 -- common/autotest_common.sh@10 -- # set +x 00:42:55.408 null0 00:42:55.408 [2024-04-17 08:39:28.593627] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:55.408 [2024-04-17 08:39:28.617672] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:55.408 08:39:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:42:55.408 08:39:28 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:42:55.408 08:39:28 -- host/digest.sh@77 -- # local rw bs qd 00:42:55.408 08:39:28 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:42:55.408 08:39:28 -- host/digest.sh@80 -- # rw=randread 00:42:55.408 08:39:28 -- host/digest.sh@80 -- # bs=4096 00:42:55.408 08:39:28 -- host/digest.sh@80 -- # qd=128 00:42:55.408 08:39:28 -- host/digest.sh@82 -- # bperfpid=84614 00:42:55.408 08:39:28 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:42:55.408 08:39:28 -- host/digest.sh@83 -- # waitforlisten 84614 /var/tmp/bperf.sock 00:42:55.408 08:39:28 -- common/autotest_common.sh@819 -- # '[' -z 84614 ']' 00:42:55.408 08:39:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:55.408 08:39:28 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:42:55.408 08:39:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:55.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:55.408 08:39:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:55.408 08:39:28 -- common/autotest_common.sh@10 -- # set +x 00:42:55.408 [2024-04-17 08:39:28.675803] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:42:55.408 [2024-04-17 08:39:28.675879] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84614 ] 00:42:55.667 [2024-04-17 08:39:28.813680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:55.667 [2024-04-17 08:39:28.910388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:56.235 08:39:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:56.235 08:39:29 -- common/autotest_common.sh@852 -- # return 0 00:42:56.235 08:39:29 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:42:56.235 08:39:29 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:42:56.235 08:39:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:56.495 08:39:29 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:56.495 08:39:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:56.755 nvme0n1 00:42:56.755 08:39:30 -- host/digest.sh@91 -- # bperf_py perform_tests 00:42:56.755 08:39:30 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:57.014 Running I/O for 2 seconds... 
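Every run_bperf pass above and below follows the same choreography: launch bdevperf paused at --wait-for-rpc, wait for its UNIX socket, finish framework init, attach the target namespace with TCP data digest enabled, then drive the timed workload from the helper script. A minimal sketch, condensed from the xtrace (digest.sh@81-@91):

    spdk=/home/vagrant/spdk_repo/spdk
    bperf=/var/tmp/bperf.sock
    "$spdk/build/examples/bdevperf" -m 2 -r "$bperf" \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &   # paused until RPC init
    waitforlisten $! "$bperf"
    "$spdk/scripts/rpc.py" -s "$bperf" framework_start_init
    "$spdk/scripts/rpc.py" -s "$bperf" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$bperf" \
        perform_tests                                         # "Running I/O for 2 seconds..."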
00:42:58.976 00:42:58.976 Latency(us) 00:42:58.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:58.976 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:58.976 nvme0n1 : 2.00 21066.93 82.29 0.00 0.00 6070.67 2375.32 18315.74 00:42:58.976 =================================================================================================================== 00:42:58.976 Total : 21066.93 82.29 0.00 0.00 6070.67 2375.32 18315.74 00:42:58.976 0 00:42:58.976 08:39:32 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:42:58.976 08:39:32 -- host/digest.sh@92 -- # get_accel_stats 00:42:58.976 08:39:32 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:42:58.976 08:39:32 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:42:58.976 | select(.opcode=="crc32c") 00:42:58.976 | "\(.module_name) \(.executed)"' 00:42:58.976 08:39:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:42:59.235 08:39:32 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:42:59.235 08:39:32 -- host/digest.sh@93 -- # exp_module=software 00:42:59.235 08:39:32 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:42:59.235 08:39:32 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:42:59.235 08:39:32 -- host/digest.sh@97 -- # killprocess 84614 00:42:59.235 08:39:32 -- common/autotest_common.sh@926 -- # '[' -z 84614 ']' 00:42:59.235 08:39:32 -- common/autotest_common.sh@930 -- # kill -0 84614 00:42:59.235 08:39:32 -- common/autotest_common.sh@931 -- # uname 00:42:59.235 08:39:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:59.235 08:39:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84614 00:42:59.235 killing process with pid 84614 00:42:59.235 Received shutdown signal, test time was about 2.000000 seconds 00:42:59.235 00:42:59.235 Latency(us) 00:42:59.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:59.235 =================================================================================================================== 00:42:59.235 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:59.235 08:39:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:42:59.235 08:39:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:42:59.235 08:39:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84614' 00:42:59.235 08:39:32 -- common/autotest_common.sh@945 -- # kill 84614 00:42:59.235 08:39:32 -- common/autotest_common.sh@950 -- # wait 84614 00:42:59.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
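The acc_module/acc_executed pair checked after each run is read straight out of the accel layer's stats; the jq filter below is verbatim from digest.sh@36-@37:

    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
            jq -rc '.operations[]
                | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"'
    )
    # Expect acc_module=software and acc_executed > 0: the TCP data digest
    # (crc32c) ran in the software accel module for this configuration.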
00:42:59.495 08:39:32 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:42:59.495 08:39:32 -- host/digest.sh@77 -- # local rw bs qd 00:42:59.495 08:39:32 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:42:59.495 08:39:32 -- host/digest.sh@80 -- # rw=randread 00:42:59.495 08:39:32 -- host/digest.sh@80 -- # bs=131072 00:42:59.495 08:39:32 -- host/digest.sh@80 -- # qd=16 00:42:59.495 08:39:32 -- host/digest.sh@82 -- # bperfpid=84704 00:42:59.495 08:39:32 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:42:59.495 08:39:32 -- host/digest.sh@83 -- # waitforlisten 84704 /var/tmp/bperf.sock 00:42:59.495 08:39:32 -- common/autotest_common.sh@819 -- # '[' -z 84704 ']' 00:42:59.495 08:39:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:59.495 08:39:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:59.495 08:39:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:59.495 08:39:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:59.495 08:39:32 -- common/autotest_common.sh@10 -- # set +x 00:42:59.495 I/O size of 131072 is greater than zero copy threshold (65536). 00:42:59.495 Zero copy mechanism will not be used. 00:42:59.495 [2024-04-17 08:39:32.753121] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:42:59.495 [2024-04-17 08:39:32.753213] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84704 ] 00:42:59.755 [2024-04-17 08:39:32.897870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:59.755 [2024-04-17 08:39:33.001242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:43:00.691 08:39:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:43:00.691 08:39:33 -- common/autotest_common.sh@852 -- # return 0 00:43:00.691 08:39:33 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:43:00.691 08:39:33 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:43:00.691 08:39:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:00.691 08:39:33 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:00.691 08:39:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:00.949 nvme0n1 00:43:00.949 08:39:34 -- host/digest.sh@91 -- # bperf_py perform_tests 00:43:00.949 08:39:34 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:01.208 I/O size of 131072 is greater than zero copy threshold (65536). 00:43:01.208 Zero copy mechanism will not be used. 00:43:01.208 Running I/O for 2 seconds... 
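The four nvmf_digest_clean passes differ only in the (rw, bs, qd) triple handed to run_bperf; the 131072-byte runs additionally print that the I/O size exceeds bdevperf's 65536-byte zero-copy threshold, so zero copy is skipped for them. As exercised at digest.sh@122-@125:

    run_bperf randread  4096   128
    run_bperf randread  131072 16    # > 65536: zero-copy mechanism not used
    run_bperf randwrite 4096   128
    run_bperf randwrite 131072 16    # > 65536: zero-copy mechanism not used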
00:43:03.124 00:43:03.124 Latency(us) 00:43:03.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:03.124 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:43:03.124 nvme0n1 : 2.00 8234.56 1029.32 0.00 0.00 1940.26 597.41 8299.32 00:43:03.124 =================================================================================================================== 00:43:03.124 Total : 8234.56 1029.32 0.00 0.00 1940.26 597.41 8299.32 00:43:03.124 0 00:43:03.124 08:39:36 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:43:03.124 08:39:36 -- host/digest.sh@92 -- # get_accel_stats 00:43:03.124 08:39:36 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:43:03.124 08:39:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:43:03.124 08:39:36 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:43:03.124 | select(.opcode=="crc32c") 00:43:03.124 | "\(.module_name) \(.executed)"' 00:43:03.384 08:39:36 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:43:03.384 08:39:36 -- host/digest.sh@93 -- # exp_module=software 00:43:03.384 08:39:36 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:43:03.384 08:39:36 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:43:03.384 08:39:36 -- host/digest.sh@97 -- # killprocess 84704 00:43:03.384 08:39:36 -- common/autotest_common.sh@926 -- # '[' -z 84704 ']' 00:43:03.384 08:39:36 -- common/autotest_common.sh@930 -- # kill -0 84704 00:43:03.384 08:39:36 -- common/autotest_common.sh@931 -- # uname 00:43:03.384 08:39:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:43:03.642 08:39:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84704 00:43:03.642 killing process with pid 84704 00:43:03.642 Received shutdown signal, test time was about 2.000000 seconds 00:43:03.642 00:43:03.642 Latency(us) 00:43:03.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:03.642 =================================================================================================================== 00:43:03.642 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:03.642 08:39:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:43:03.642 08:39:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:43:03.642 08:39:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84704' 00:43:03.642 08:39:36 -- common/autotest_common.sh@945 -- # kill 84704 00:43:03.642 08:39:36 -- common/autotest_common.sh@950 -- # wait 84704 00:43:03.642 08:39:36 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:43:03.642 08:39:36 -- host/digest.sh@77 -- # local rw bs qd 00:43:03.642 08:39:36 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:43:03.642 08:39:36 -- host/digest.sh@80 -- # rw=randwrite 00:43:03.642 08:39:36 -- host/digest.sh@80 -- # bs=4096 00:43:03.642 08:39:36 -- host/digest.sh@80 -- # qd=128 00:43:03.642 08:39:36 -- host/digest.sh@82 -- # bperfpid=84789 00:43:03.642 08:39:36 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:43:03.642 08:39:36 -- host/digest.sh@83 -- # waitforlisten 84789 /var/tmp/bperf.sock 00:43:03.642 08:39:36 -- common/autotest_common.sh@819 -- # '[' -z 84789 ']' 00:43:03.642 08:39:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:03.642 08:39:36 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:43:03.642 08:39:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:03.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:03.642 08:39:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:43:03.642 08:39:36 -- common/autotest_common.sh@10 -- # set +x 00:43:03.900 [2024-04-17 08:39:37.010182] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:43:03.900 [2024-04-17 08:39:37.010260] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84789 ] 00:43:03.900 [2024-04-17 08:39:37.146770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:04.159 [2024-04-17 08:39:37.247464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:43:04.727 08:39:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:43:04.727 08:39:37 -- common/autotest_common.sh@852 -- # return 0 00:43:04.727 08:39:37 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:43:04.727 08:39:37 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:43:04.727 08:39:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:05.031 08:39:38 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:05.031 08:39:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:05.308 nvme0n1 00:43:05.308 08:39:38 -- host/digest.sh@91 -- # bperf_py perform_tests 00:43:05.308 08:39:38 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:05.566 Running I/O for 2 seconds... 
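waitforlisten, whose banner precedes every bperf startup, does not sleep a fixed interval: it polls until the RPC socket answers, bailing out if the process dies first. A minimal sketch consistent with the xtrace (local max_retries=100 is visible above; the rpc_get_methods probe is an assumption, not shown in this log):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target process died
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                rpc_get_methods &> /dev/null; then
                return 0                              # socket is up and answering
            fi
            sleep 0.5
        done
        return 1                                      # never started listening
    }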
00:43:07.471 00:43:07.471 Latency(us) 00:43:07.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:07.471 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:07.471 nvme0n1 : 2.01 25112.33 98.10 0.00 0.00 5092.04 2074.83 8699.98 00:43:07.471 =================================================================================================================== 00:43:07.471 Total : 25112.33 98.10 0.00 0.00 5092.04 2074.83 8699.98 00:43:07.471 0 00:43:07.471 08:39:40 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:43:07.471 08:39:40 -- host/digest.sh@92 -- # get_accel_stats 00:43:07.471 08:39:40 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:43:07.471 08:39:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:43:07.471 08:39:40 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:43:07.471 | select(.opcode=="crc32c") 00:43:07.471 | "\(.module_name) \(.executed)"' 00:43:07.730 08:39:40 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:43:07.730 08:39:40 -- host/digest.sh@93 -- # exp_module=software 00:43:07.730 08:39:40 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:43:07.730 08:39:40 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:43:07.730 08:39:40 -- host/digest.sh@97 -- # killprocess 84789 00:43:07.730 08:39:40 -- common/autotest_common.sh@926 -- # '[' -z 84789 ']' 00:43:07.730 08:39:40 -- common/autotest_common.sh@930 -- # kill -0 84789 00:43:07.730 08:39:40 -- common/autotest_common.sh@931 -- # uname 00:43:07.730 08:39:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:43:07.730 08:39:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84789 00:43:07.730 08:39:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:43:07.730 killing process with pid 84789 00:43:07.730 08:39:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:43:07.730 08:39:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84789' 00:43:07.730 08:39:40 -- common/autotest_common.sh@945 -- # kill 84789 00:43:07.730 Received shutdown signal, test time was about 2.000000 seconds 00:43:07.730 00:43:07.730 Latency(us) 00:43:07.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:07.730 =================================================================================================================== 00:43:07.730 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:07.730 08:39:40 -- common/autotest_common.sh@950 -- # wait 84789 00:43:07.989 08:39:41 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:43:07.989 08:39:41 -- host/digest.sh@77 -- # local rw bs qd 00:43:07.989 08:39:41 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:43:07.989 08:39:41 -- host/digest.sh@80 -- # rw=randwrite 00:43:07.989 08:39:41 -- host/digest.sh@80 -- # bs=131072 00:43:07.989 08:39:41 -- host/digest.sh@80 -- # qd=16 00:43:07.989 08:39:41 -- host/digest.sh@82 -- # bperfpid=84878 00:43:07.989 08:39:41 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:43:07.990 08:39:41 -- host/digest.sh@83 -- # waitforlisten 84878 /var/tmp/bperf.sock 00:43:07.990 08:39:41 -- common/autotest_common.sh@819 -- # '[' -z 84878 ']' 00:43:07.990 08:39:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:07.990 08:39:41 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:43:07.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:07.990 08:39:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:07.990 08:39:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:43:07.990 08:39:41 -- common/autotest_common.sh@10 -- # set +x 00:43:07.990 I/O size of 131072 is greater than zero copy threshold (65536). 00:43:07.990 Zero copy mechanism will not be used. 00:43:07.990 [2024-04-17 08:39:41.204346] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:43:07.990 [2024-04-17 08:39:41.204470] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84878 ] 00:43:08.248 [2024-04-17 08:39:41.348662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:08.248 [2024-04-17 08:39:41.451095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:43:08.824 08:39:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:43:08.824 08:39:42 -- common/autotest_common.sh@852 -- # return 0 00:43:08.824 08:39:42 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:43:08.824 08:39:42 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:43:08.824 08:39:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:09.391 08:39:42 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:09.391 08:39:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:09.650 nvme0n1 00:43:09.650 08:39:42 -- host/digest.sh@91 -- # bperf_py perform_tests 00:43:09.650 08:39:42 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:09.650 I/O size of 131072 is greater than zero copy threshold (65536). 00:43:09.650 Zero copy mechanism will not be used. 00:43:09.650 Running I/O for 2 seconds... 
00:43:11.554 00:43:11.554 Latency(us) 00:43:11.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:11.554 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:43:11.554 nvme0n1 : 2.00 9381.20 1172.65 0.00 0.00 1701.87 1237.74 4006.57 00:43:11.554 =================================================================================================================== 00:43:11.554 Total : 9381.20 1172.65 0.00 0.00 1701.87 1237.74 4006.57 00:43:11.554 0 00:43:11.554 08:39:44 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:43:11.554 08:39:44 -- host/digest.sh@92 -- # get_accel_stats 00:43:11.554 08:39:44 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:43:11.554 08:39:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:43:11.554 08:39:44 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:43:11.554 | select(.opcode=="crc32c") 00:43:11.554 | "\(.module_name) \(.executed)"' 00:43:11.816 08:39:45 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:43:11.816 08:39:45 -- host/digest.sh@93 -- # exp_module=software 00:43:11.816 08:39:45 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:43:11.816 08:39:45 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:43:11.816 08:39:45 -- host/digest.sh@97 -- # killprocess 84878 00:43:11.816 08:39:45 -- common/autotest_common.sh@926 -- # '[' -z 84878 ']' 00:43:11.816 08:39:45 -- common/autotest_common.sh@930 -- # kill -0 84878 00:43:11.816 08:39:45 -- common/autotest_common.sh@931 -- # uname 00:43:11.816 08:39:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:43:11.816 08:39:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84878 00:43:11.816 08:39:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:43:11.816 08:39:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:43:11.816 killing process with pid 84878 00:43:11.816 08:39:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84878' 00:43:11.816 08:39:45 -- common/autotest_common.sh@945 -- # kill 84878 00:43:11.816 Received shutdown signal, test time was about 2.000000 seconds 00:43:11.816 00:43:11.816 Latency(us) 00:43:11.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:11.816 =================================================================================================================== 00:43:11.816 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:11.816 08:39:45 -- common/autotest_common.sh@950 -- # wait 84878 00:43:12.078 08:39:45 -- host/digest.sh@126 -- # killprocess 84564 00:43:12.078 08:39:45 -- common/autotest_common.sh@926 -- # '[' -z 84564 ']' 00:43:12.078 08:39:45 -- common/autotest_common.sh@930 -- # kill -0 84564 00:43:12.078 08:39:45 -- common/autotest_common.sh@931 -- # uname 00:43:12.078 08:39:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:43:12.078 08:39:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84564 00:43:12.078 08:39:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:43:12.078 08:39:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:43:12.078 killing process with pid 84564 00:43:12.078 08:39:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84564' 00:43:12.078 08:39:45 -- common/autotest_common.sh@945 -- # kill 84564 00:43:12.078 08:39:45 -- common/autotest_common.sh@950 -- # wait 84564 00:43:12.337 00:43:12.337 real 0m18.081s 00:43:12.337 
user 0m34.446s 00:43:12.337 sys 0m4.431s 00:43:12.337 08:39:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:12.337 08:39:45 -- common/autotest_common.sh@10 -- # set +x 00:43:12.337 ************************************ 00:43:12.337 END TEST nvmf_digest_clean 00:43:12.337 ************************************ 00:43:12.594 08:39:45 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:43:12.594 08:39:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:43:12.594 08:39:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:43:12.594 08:39:45 -- common/autotest_common.sh@10 -- # set +x 00:43:12.594 ************************************ 00:43:12.594 START TEST nvmf_digest_error 00:43:12.594 ************************************ 00:43:12.594 08:39:45 -- common/autotest_common.sh@1104 -- # run_digest_error 00:43:12.594 08:39:45 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:43:12.594 08:39:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:43:12.594 08:39:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:43:12.594 08:39:45 -- common/autotest_common.sh@10 -- # set +x 00:43:12.594 08:39:45 -- nvmf/common.sh@469 -- # nvmfpid=84990 00:43:12.594 08:39:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:43:12.594 08:39:45 -- nvmf/common.sh@470 -- # waitforlisten 84990 00:43:12.594 08:39:45 -- common/autotest_common.sh@819 -- # '[' -z 84990 ']' 00:43:12.594 08:39:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:12.594 08:39:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:43:12.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:12.594 08:39:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:12.594 08:39:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:43:12.594 08:39:45 -- common/autotest_common.sh@10 -- # set +x 00:43:12.594 [2024-04-17 08:39:45.750475] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:43:12.594 [2024-04-17 08:39:45.750555] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:12.594 [2024-04-17 08:39:45.891824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:12.852 [2024-04-17 08:39:45.997509] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:43:12.852 [2024-04-17 08:39:45.997657] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:12.852 [2024-04-17 08:39:45.997670] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:12.852 [2024-04-17 08:39:45.997678] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
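nvmf_digest_error brings the target up exactly as digest_clean did, but pauses it at --wait-for-rpc so crc32c can be re-routed into the accel error module before framework init. A sketch of that startup plus the target config implied by the null0/transport/listener notices below; the subsystem RPC block is an assumption modeled on a standard digest.sh target, not read from this log:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    waitforlisten "$nvmfpid"
    rpc_cmd accel_assign_opc -o crc32c -m error   # host/digest.sh@103, before init
    rpc_cmd framework_start_init
    # Assumed common_target_config shape (names/sizes illustrative):
    rpc_cmd bdev_null_create null0 1000 512
    rpc_cmd nvmf_create_transport -t tcp -o
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420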
00:43:12.852 [2024-04-17 08:39:45.997703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:43:13.433 08:39:46 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:43:13.433 08:39:46 -- common/autotest_common.sh@852 -- # return 0
00:43:13.433 08:39:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:43:13.433 08:39:46 -- common/autotest_common.sh@718 -- # xtrace_disable
00:43:13.433 08:39:46 -- common/autotest_common.sh@10 -- # set +x
00:43:13.433 08:39:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:43:13.433 08:39:46 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:43:13.433 08:39:46 -- common/autotest_common.sh@551 -- # xtrace_disable
00:43:13.433 08:39:46 -- common/autotest_common.sh@10 -- # set +x
00:43:13.433 [2024-04-17 08:39:46.688851] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:43:13.433 08:39:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:43:13.433 08:39:46 -- host/digest.sh@104 -- # common_target_config
00:43:13.433 08:39:46 -- host/digest.sh@43 -- # rpc_cmd
00:43:13.433 08:39:46 -- common/autotest_common.sh@551 -- # xtrace_disable
00:43:13.433 08:39:46 -- common/autotest_common.sh@10 -- # set +x
00:43:13.697 null0
00:43:13.697 [2024-04-17 08:39:46.792586] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:43:13.697 [2024-04-17 08:39:46.816691] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:43:13.697 08:39:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:43:13.697 08:39:46 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128
00:43:13.697 08:39:46 -- host/digest.sh@54 -- # local rw bs qd
00:43:13.697 08:39:46 -- host/digest.sh@56 -- # rw=randread
00:43:13.697 08:39:46 -- host/digest.sh@56 -- # bs=4096
00:43:13.697 08:39:46 -- host/digest.sh@56 -- # qd=128
00:43:13.697 08:39:46 -- host/digest.sh@58 -- # bperfpid=85040
00:43:13.697 08:39:46 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:43:13.697 08:39:46 -- host/digest.sh@60 -- # waitforlisten 85040 /var/tmp/bperf.sock
00:43:13.697 08:39:46 -- common/autotest_common.sh@819 -- # '[' -z 85040 ']'
00:43:13.697 08:39:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:43:13.697 08:39:46 -- common/autotest_common.sh@824 -- # local max_retries=100
00:43:13.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:43:13.697 08:39:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:43:13.697 08:39:46 -- common/autotest_common.sh@828 -- # xtrace_disable
00:43:13.697 08:39:46 -- common/autotest_common.sh@10 -- # set +x
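Two pieces of setup land back to back here. First, the bare rpc_cmd under common_target_config feeds the target its configuration in one batch; the notices above (null0, TCP Transport Init, listener on 10.0.0.2:4420) are consistent with a sequence along these lines (a sketch only: the null bdev size, block size, and subsystem flags are assumptions, not taken from this log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_null_create null0 100 4096        # backing namespace; size and block size assumed
  $rpc nvmf_create_transport -t tcp
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Second, bdevperf is launched with -z, so it comes up idle on its own RPC socket (/var/tmp/bperf.sock) and waits to be configured; the bperf_rpc calls that follow attach the NVMe-oF controller and arm the fault before perform_tests starts the 2-second randread workload.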
00:43:13.698 [2024-04-17 08:39:46.878281] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:43:13.698 [2024-04-17 08:39:46.878359] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85040 ]
00:43:13.698 [2024-04-17 08:39:47.002571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:13.960 [2024-04-17 08:39:47.105167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:43:14.526 08:39:47 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:43:14.526 08:39:47 -- common/autotest_common.sh@852 -- # return 0
00:43:14.526 08:39:47 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:43:14.526 08:39:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:43:14.783 08:39:47 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:43:14.783 08:39:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:43:14.783 08:39:47 -- common/autotest_common.sh@10 -- # set +x
00:43:14.783 08:39:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:43:14.783 08:39:48 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:43:14.783 08:39:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:43:15.041 nvme0n1
00:43:15.041 08:39:48 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:43:15.041 08:39:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:43:15.041 08:39:48 -- common/autotest_common.sh@10 -- # set +x
00:43:15.041 08:39:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:43:15.041 08:39:48 -- host/digest.sh@69 -- # bperf_py perform_tests
00:43:15.041 08:39:48 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:43:15.299 Running I/O for 2 seconds...
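The flood that follows is the intended result of this setup, so it is worth decoding once. The controller was attached with --ddgst (data digest enabled on the TCP connection), and target-side crc32c is routed to the error module, so accel_error_inject_error -o crc32c -t corrupt arms the target to emit bad data digests (the earlier -t disable call just clears any fault left armed from a previous run). For every read, the initiator's nvme_tcp layer recomputes the digest over the received payload, detects the mismatch ("data digest error on tqpair"), and completes the command with COMMAND TRANSIENT TRANSPORT ERROR (00/22): status code type 0x0, status code 0x22 (Transient Transport Error), with dnr:0, meaning retries are permitted; the --bdev-retry-count -1 option above lets bdev_nvme retry without limit. The arm-and-run pair, as issued by the xtrace above (rpc.py without -s goes to the target's default /var/tmp/spdk.sock):

  # target side: corrupt the results of subsequent crc32c operations (-i 256 as used by this test)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # initiator side: tell the idle bdevperf instance to run its configured workload
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests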
00:43:15.299 [2024-04-17 08:39:48.459617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.299 [2024-04-17 08:39:48.459675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.299 [2024-04-17 08:39:48.459686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.299 [2024-04-17 08:39:48.471275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.299 [2024-04-17 08:39:48.471324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.299 [2024-04-17 08:39:48.471334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.299 [2024-04-17 08:39:48.481518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.299 [2024-04-17 08:39:48.481564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.299 [2024-04-17 08:39:48.481578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.299 [2024-04-17 08:39:48.494513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.299 [2024-04-17 08:39:48.494584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.299 [2024-04-17 08:39:48.494594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.299 [2024-04-17 08:39:48.528769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.299 [2024-04-17 08:39:48.528834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.299 [2024-04-17 08:39:48.528844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.299 [2024-04-17 08:39:48.542099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.299 [2024-04-17 08:39:48.542148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.299 [2024-04-17 08:39:48.542158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.300 [2024-04-17 08:39:48.555464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.300 [2024-04-17 08:39:48.555515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.300 [2024-04-17 08:39:48.555525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.300 [2024-04-17 08:39:48.568566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.300 [2024-04-17 08:39:48.568615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.300 [2024-04-17 08:39:48.568625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.300 [2024-04-17 08:39:48.579911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.300 [2024-04-17 08:39:48.579954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.300 [2024-04-17 08:39:48.579964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.300 [2024-04-17 08:39:48.590001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.300 [2024-04-17 08:39:48.590042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.300 [2024-04-17 08:39:48.590051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.300 [2024-04-17 08:39:48.603054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.300 [2024-04-17 08:39:48.603100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.300 [2024-04-17 08:39:48.603110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.300 [2024-04-17 08:39:48.616699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.300 [2024-04-17 08:39:48.616744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.300 [2024-04-17 08:39:48.616754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.300 [2024-04-17 08:39:48.626899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.300 [2024-04-17 08:39:48.626941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.300 [2024-04-17 08:39:48.626950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.559 [2024-04-17 08:39:48.637755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.559 [2024-04-17 08:39:48.637791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.637799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.648285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.648324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.648332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.659942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.659999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.660008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.668728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.668767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.668776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.680893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.680931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.680939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.692778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.692814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.692823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.702474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.702542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.702552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.712635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.712674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:43:15.560 [2024-04-17 08:39:48.712683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.722421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.722456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.722465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.732195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.732235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.732244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.742440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.742480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.742490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.752552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.752605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.752619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.766139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.766187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.766199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.777388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.777442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.777452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.791242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.791290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:4164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.791301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.803082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.803126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.803136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.815497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.815535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.815543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.830669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.830714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.830725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.841529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.841569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.841580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.853670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.853718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.853730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.868777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.868831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.868841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.560 [2024-04-17 08:39:48.883259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.560 [2024-04-17 08:39:48.883328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.560 [2024-04-17 08:39:48.883346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.820 [2024-04-17 08:39:48.898165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.820 [2024-04-17 08:39:48.898215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.820 [2024-04-17 08:39:48.898225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.820 [2024-04-17 08:39:48.913325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.820 [2024-04-17 08:39:48.913375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.820 [2024-04-17 08:39:48.913386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.820 [2024-04-17 08:39:48.927050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.820 [2024-04-17 08:39:48.927094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.820 [2024-04-17 08:39:48.927104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.820 [2024-04-17 08:39:48.940644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.820 [2024-04-17 08:39:48.940689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.820 [2024-04-17 08:39:48.940699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.820 [2024-04-17 08:39:48.954374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.820 [2024-04-17 08:39:48.954435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.820 [2024-04-17 08:39:48.954446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.820 [2024-04-17 08:39:48.966258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.820 [2024-04-17 08:39:48.966297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.820 [2024-04-17 08:39:48.966307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.820 [2024-04-17 08:39:48.979683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 
00:43:15.820 [2024-04-17 08:39:48.979724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.820 [2024-04-17 08:39:48.979735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.820 [2024-04-17 08:39:48.990958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.820 [2024-04-17 08:39:48.990996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.820 [2024-04-17 08:39:48.991006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.820 [2024-04-17 08:39:49.000829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.820 [2024-04-17 08:39:49.000864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.820 [2024-04-17 08:39:49.000873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.820 [2024-04-17 08:39:49.013168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.820 [2024-04-17 08:39:49.013202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.820 [2024-04-17 08:39:49.013212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.820 [2024-04-17 08:39:49.022574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.820 [2024-04-17 08:39:49.022608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.820 [2024-04-17 08:39:49.022617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.820 [2024-04-17 08:39:49.033819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.820 [2024-04-17 08:39:49.033851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.820 [2024-04-17 08:39:49.033859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.820 [2024-04-17 08:39:49.043998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.820 [2024-04-17 08:39:49.044036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.820 [2024-04-17 08:39:49.044046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.820 [2024-04-17 08:39:49.055051] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.820 [2024-04-17 08:39:49.055089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.820 [2024-04-17 08:39:49.055098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.820 [2024-04-17 08:39:49.066090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.820 [2024-04-17 08:39:49.066132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.820 [2024-04-17 08:39:49.066142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.820 [2024-04-17 08:39:49.081833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.821 [2024-04-17 08:39:49.081876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.821 [2024-04-17 08:39:49.081885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.821 [2024-04-17 08:39:49.095779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.821 [2024-04-17 08:39:49.095835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.821 [2024-04-17 08:39:49.095845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.821 [2024-04-17 08:39:49.107576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.821 [2024-04-17 08:39:49.107629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.821 [2024-04-17 08:39:49.107645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.821 [2024-04-17 08:39:49.121376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.821 [2024-04-17 08:39:49.121436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.821 [2024-04-17 08:39:49.121448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:15.821 [2024-04-17 08:39:49.132565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.821 [2024-04-17 08:39:49.132601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.821 [2024-04-17 08:39:49.132610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:43:15.821 [2024-04-17 08:39:49.144768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:15.821 [2024-04-17 08:39:49.144810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:15.821 [2024-04-17 08:39:49.144821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.081 [2024-04-17 08:39:49.157138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.081 [2024-04-17 08:39:49.157179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.081 [2024-04-17 08:39:49.157189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.081 [2024-04-17 08:39:49.169770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.081 [2024-04-17 08:39:49.169804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.081 [2024-04-17 08:39:49.169812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.081 [2024-04-17 08:39:49.180260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.081 [2024-04-17 08:39:49.180309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.081 [2024-04-17 08:39:49.180322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.081 [2024-04-17 08:39:49.191889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.081 [2024-04-17 08:39:49.191928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.081 [2024-04-17 08:39:49.191938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.081 [2024-04-17 08:39:49.203204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.081 [2024-04-17 08:39:49.203245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.081 [2024-04-17 08:39:49.203255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.081 [2024-04-17 08:39:49.214171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.081 [2024-04-17 08:39:49.214222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.081 [2024-04-17 08:39:49.214237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.081 [2024-04-17 08:39:49.226420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.081 [2024-04-17 08:39:49.226461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.081 [2024-04-17 08:39:49.226475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.081 [2024-04-17 08:39:49.239225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.081 [2024-04-17 08:39:49.239263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.081 [2024-04-17 08:39:49.239273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.081 [2024-04-17 08:39:49.252077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.081 [2024-04-17 08:39:49.252115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.081 [2024-04-17 08:39:49.252125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.081 [2024-04-17 08:39:49.265096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.081 [2024-04-17 08:39:49.265129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.081 [2024-04-17 08:39:49.265138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.081 [2024-04-17 08:39:49.275925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.081 [2024-04-17 08:39:49.275972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.081 [2024-04-17 08:39:49.275987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.081 [2024-04-17 08:39:49.287921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.081 [2024-04-17 08:39:49.287972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.081 [2024-04-17 08:39:49.287987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.081 [2024-04-17 08:39:49.302846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.081 [2024-04-17 08:39:49.302883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.081 [2024-04-17 08:39:49.302893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.081 [2024-04-17 08:39:49.315785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.082 [2024-04-17 08:39:49.315820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.082 [2024-04-17 08:39:49.315829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.082 [2024-04-17 08:39:49.328923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.082 [2024-04-17 08:39:49.328961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.082 [2024-04-17 08:39:49.328970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.082 [2024-04-17 08:39:49.343511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.082 [2024-04-17 08:39:49.343562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.082 [2024-04-17 08:39:49.343576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.082 [2024-04-17 08:39:49.359367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.082 [2024-04-17 08:39:49.359453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.082 [2024-04-17 08:39:49.359470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.082 [2024-04-17 08:39:49.375707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.082 [2024-04-17 08:39:49.375769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.082 [2024-04-17 08:39:49.375785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.082 [2024-04-17 08:39:49.391318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.082 [2024-04-17 08:39:49.391372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.082 [2024-04-17 08:39:49.391384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.082 [2024-04-17 08:39:49.405714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.082 [2024-04-17 08:39:49.405797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:43:16.082 [2024-04-17 08:39:49.405816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.343 [2024-04-17 08:39:49.420573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.343 [2024-04-17 08:39:49.420644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.343 [2024-04-17 08:39:49.420663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.343 [2024-04-17 08:39:49.435690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.343 [2024-04-17 08:39:49.435747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.343 [2024-04-17 08:39:49.435762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.343 [2024-04-17 08:39:49.450817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.343 [2024-04-17 08:39:49.450903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.343 [2024-04-17 08:39:49.450922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.343 [2024-04-17 08:39:49.463835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.343 [2024-04-17 08:39:49.463907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.343 [2024-04-17 08:39:49.463927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.343 [2024-04-17 08:39:49.478964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.343 [2024-04-17 08:39:49.479014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.343 [2024-04-17 08:39:49.479025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.343 [2024-04-17 08:39:49.493563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.343 [2024-04-17 08:39:49.493616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.343 [2024-04-17 08:39:49.493627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.343 [2024-04-17 08:39:49.508772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.343 [2024-04-17 08:39:49.508821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 
lba:8662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.343 [2024-04-17 08:39:49.508831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.343 [2024-04-17 08:39:49.521465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.343 [2024-04-17 08:39:49.521505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.343 [2024-04-17 08:39:49.521515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.343 [2024-04-17 08:39:49.530943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.343 [2024-04-17 08:39:49.530983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.343 [2024-04-17 08:39:49.530994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.343 [2024-04-17 08:39:49.544765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.343 [2024-04-17 08:39:49.544807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.343 [2024-04-17 08:39:49.544817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.343 [2024-04-17 08:39:49.557049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.343 [2024-04-17 08:39:49.557092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.343 [2024-04-17 08:39:49.557103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.343 [2024-04-17 08:39:49.567917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.343 [2024-04-17 08:39:49.567967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.343 [2024-04-17 08:39:49.567979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.343 [2024-04-17 08:39:49.579112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.343 [2024-04-17 08:39:49.579157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.343 [2024-04-17 08:39:49.579170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.343 [2024-04-17 08:39:49.592556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230) 00:43:16.343 [2024-04-17 08:39:49.592599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:16.343 [2024-04-17 08:39:49.592610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:43:16.343 [2024-04-17 08:39:49.603802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109c230)
00:43:16.343 [2024-04-17 08:39:49.603900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:16.343 [2024-04-17 08:39:49.603956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record pattern (nvme_tcp.c:1391 "data digest error on tqpair=(0x109c230)", nvme_qpair.c:243 READ command print, nvme_qpair.c:474 "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" completion) repeats for dozens more len:1 READ commands with varying cid and lba, from 08:39:49.614 through 08:39:50.429 ...]
00:43:17.127
00:43:17.127                                                                                                 Latency(us)
00:43:17.127 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:43:17.127 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:43:17.127 nvme0n1                                :       2.00   21490.40      83.95       0.00     0.00    5950.43    2547.03   30907.81
00:43:17.127 ===================================================================================================================
00:43:17.127 Total                                  :              21490.40      83.95       0.00     0.00    5950.43    2547.03   30907.81
00:43:17.127 0
00:43:17.385 08:39:50 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:43:17.385 08:39:50 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:43:17.385 08:39:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:43:17.385 08:39:50 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:43:17.385 | .driver_specific
00:43:17.385 | .nvme_error
00:43:17.385 | .status_code
00:43:17.385 | .command_transient_transport_error'
00:43:17.385 08:39:50 -- host/digest.sh@71 -- # (( 168 > 0 ))
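This is the pass/fail pivot of the first run: `get_transient_errcount` reads the per-bdev NVMe error counters (enabled earlier via `bdev_nvme_set_options --nvme-error-stat`), and the `(( 168 > 0 ))` check asserts that the injected digest corruption actually surfaced as transient transport errors. A minimal standalone sketch of the same query, using the socket and bdev names from this trace (the `errcount` variable name is illustrative):

```sh
#!/usr/bin/env bash
# Query the transient-transport-error counter for a bdev, mirroring
# host/digest.sh's get_transient_errcount. The counter only exists when
# bdev_nvme_set_options was called with --nvme-error-stat.
rpc_sock=/var/tmp/bperf.sock   # bdevperf RPC socket, as in this trace
bdev=nvme0n1                   # bdev created by bdev_nvme_attach_controller

errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" \
    bdev_get_iostat -b "$bdev" |
  jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# The test only passes when the injected CRC32C corruption showed up as
# transient transport errors (status 00/22) instead of hard I/O failures.
(( errcount > 0 )) && echo "OK: $errcount transient transport errors recorded"
```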
00:43:17.385 08:39:50 -- host/digest.sh@73 -- # killprocess 85040
00:43:17.385 08:39:50 -- common/autotest_common.sh@926 -- # '[' -z 85040 ']'
00:43:17.385 08:39:50 -- common/autotest_common.sh@930 -- # kill -0 85040
00:43:17.385 08:39:50 -- common/autotest_common.sh@931 -- # uname
00:43:17.385 08:39:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:43:17.385 08:39:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85040
00:43:17.385 08:39:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:43:17.385 08:39:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:43:17.385 08:39:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85040'
killing process with pid 85040
00:43:17.385 08:39:50 -- common/autotest_common.sh@945 -- # kill 85040
00:43:17.385 08:39:50 -- common/autotest_common.sh@950 -- # wait 85040
Received shutdown signal, test time was about 2.000000 seconds
00:43:17.385
00:43:17.385                                                                                                 Latency(us)
00:43:17.385 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:43:17.385 ===================================================================================================================
00:43:17.385 Total                                  :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:43:17.643 08:39:50 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:43:17.643 08:39:50 -- host/digest.sh@54 -- # local rw bs qd
00:43:17.643 08:39:50 -- host/digest.sh@56 -- # rw=randread
00:43:17.643 08:39:50 -- host/digest.sh@56 -- # bs=131072
00:43:17.643 08:39:50 -- host/digest.sh@56 -- # qd=16
00:43:17.643 08:39:50 -- host/digest.sh@58 -- # bperfpid=85130
00:43:17.643 08:39:50 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:43:17.643 08:39:50 -- host/digest.sh@60 -- # waitforlisten 85130 /var/tmp/bperf.sock
00:43:17.643 08:39:50 -- common/autotest_common.sh@819 -- # '[' -z 85130 ']'
00:43:17.643 08:39:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:43:17.643 08:39:50 -- common/autotest_common.sh@824 -- # local max_retries=100
00:43:17.643 08:39:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:43:17.643 08:39:50 -- common/autotest_common.sh@828 -- # xtrace_disable
00:43:17.643 08:39:50 -- common/autotest_common.sh@10 -- # set +x
00:43:17.643 [2024-04-17 08:39:50.972162] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:43:17.643 [2024-04-17 08:39:50.972331] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85130 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
00:43:17.900 [2024-04-17 08:39:51.111888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:17.900 [2024-04-17 08:39:51.213549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:43:18.832 08:39:51 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:43:18.832 08:39:51 -- common/autotest_common.sh@852 -- # return 0
00:43:18.832 08:39:51 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:43:18.832 08:39:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:43:18.832 08:39:52 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:43:18.832 08:39:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:43:18.832 08:39:52 -- common/autotest_common.sh@10 -- # set +x
00:43:18.832 08:39:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:43:18.832 08:39:52 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:43:18.832 08:39:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:43:19.090 nvme0n1
00:43:19.090 08:39:52 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:43:19.090 08:39:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:43:19.090 08:39:52 -- common/autotest_common.sh@10 -- # set +x
00:43:19.090 08:39:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:43:19.090 08:39:52 -- host/digest.sh@69 -- # bperf_py perform_tests
00:43:19.090 08:39:52 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 2 seconds...
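The xtrace above is the whole error-injection recipe for the second run: enable error statistics and unlimited bdev retries, reset the accel CRC32C injector, attach the controller with TCP data digest enabled (`--ddgst`), arm the injector to corrupt 32 CRC32C operations (twice the qd=16), then start the workload. A condensed sketch of the same sequence; the bdevperf socket and target address come from this trace, while `/var/tmp/spdk.sock` as the socket behind `rpc_cmd` is an assumption:

```sh
#!/usr/bin/env bash
SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock   # bdevperf RPC socket (from the trace)
TGT_SOCK=/var/tmp/spdk.sock      # assumed: default app socket that rpc_cmd targets

# Host side: never give up on retries, and keep per-status-code error counters.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
  --nvme-error-stat --bdev-retry-count -1

# Make sure no stale CRC32C corruption is armed before attaching.
"$SPDK/scripts/rpc.py" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t disable

# Attach over TCP with data digest enabled, so every received payload is checked.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt the next 32 CRC32C computations, which the host then reports as
# "data digest error" when it verifies incoming data PDUs.
"$SPDK/scripts/rpc.py" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the preconfigured 2-second randread workload.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
```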
00:43:19.351 [2024-04-17 08:39:52.487528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30)
00:43:19.351 [2024-04-17 08:39:52.487681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:19.351 [2024-04-17 08:39:52.487725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:43:19.351 [2024-04-17 08:39:52.491263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30)
00:43:19.351 [2024-04-17 08:39:52.491376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:19.351 [2024-04-17 08:39:52.491443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pattern repeats for dozens more len:32 READ commands on tqpair=(0x1263a30), with varying cid and lba, from 08:39:52.495 onward ...]
00:43:19.352 [2024-04-17 08:39:52.672012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.352 [2024-04-17 08:39:52.672052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:19.352 [2024-04-17 08:39:52.675900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.352 [2024-04-17 08:39:52.675997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.352 [2024-04-17 08:39:52.676036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:19.353 [2024-04-17 08:39:52.679851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.353 [2024-04-17 08:39:52.679950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.353 [2024-04-17 08:39:52.679987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:19.613 [2024-04-17 08:39:52.683733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.613 [2024-04-17 08:39:52.683829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.613 [2024-04-17 08:39:52.683867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:19.613 [2024-04-17 08:39:52.686601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.613 [2024-04-17 08:39:52.686694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.613 [2024-04-17 08:39:52.686733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:19.613 [2024-04-17 08:39:52.690328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.613 [2024-04-17 08:39:52.690436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.613 [2024-04-17 08:39:52.690491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:19.613 [2024-04-17 08:39:52.694277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.613 [2024-04-17 08:39:52.694374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.613 [2024-04-17 08:39:52.694438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:19.613 [2024-04-17 08:39:52.698457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1263a30) 00:43:19.613 [2024-04-17 08:39:52.698556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.613 [2024-04-17 08:39:52.698586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:19.613 [2024-04-17 08:39:52.701342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.613 [2024-04-17 08:39:52.701382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.613 [2024-04-17 08:39:52.701402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:19.613 [2024-04-17 08:39:52.705135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.613 [2024-04-17 08:39:52.705172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.613 [2024-04-17 08:39:52.705181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:19.613 [2024-04-17 08:39:52.708519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.613 [2024-04-17 08:39:52.708555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.613 [2024-04-17 08:39:52.708564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:19.613 [2024-04-17 08:39:52.712483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.613 [2024-04-17 08:39:52.712525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.613 [2024-04-17 08:39:52.712535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:19.613 [2024-04-17 08:39:52.715608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.613 [2024-04-17 08:39:52.715663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.613 [2024-04-17 08:39:52.715672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:19.613 [2024-04-17 08:39:52.718908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.613 [2024-04-17 08:39:52.718950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.613 [2024-04-17 08:39:52.718960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:19.613 [2024-04-17 08:39:52.722801] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.613 [2024-04-17 08:39:52.722841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.613 [2024-04-17 08:39:52.722850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:19.613 [2024-04-17 08:39:52.726613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.613 [2024-04-17 08:39:52.726652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.613 [2024-04-17 08:39:52.726661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:19.613 [2024-04-17 08:39:52.730190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.730230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.730239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.734283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.734322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.734331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.738334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.738376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.738384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.741945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.741997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.742006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.745862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.745904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.745912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:43:19.614 [2024-04-17 08:39:52.749166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.749207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.749218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.752532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.752572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.752580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.756611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.756655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.756665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.760030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.760073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.760082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.763651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.763694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.763703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.766973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.767014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.767024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.770513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.770620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.770660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.774795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.774896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.774941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.778816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.778917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.778954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.782474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.782556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.782566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.786609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.786704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.786741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.789964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.790055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.790095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.793681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.793766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.793823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.797588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.797690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.797728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.800639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.800724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.800767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.804164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.804266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.804304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.807177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.807261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.807295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.811006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.811096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.811147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.814939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.814980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.814989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.818618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.818658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.818667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.822481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.822518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.822527] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.826082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.614 [2024-04-17 08:39:52.826119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.614 [2024-04-17 08:39:52.826128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:19.614 [2024-04-17 08:39:52.829381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.615 [2024-04-17 08:39:52.829437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.615 [2024-04-17 08:39:52.829446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:19.615 [2024-04-17 08:39:52.833644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.615 [2024-04-17 08:39:52.833693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.615 [2024-04-17 08:39:52.833702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:19.615 [2024-04-17 08:39:52.837570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.615 [2024-04-17 08:39:52.837605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.615 [2024-04-17 08:39:52.837614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:19.615 [2024-04-17 08:39:52.841547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.615 [2024-04-17 08:39:52.841585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.615 [2024-04-17 08:39:52.841595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:19.615 [2024-04-17 08:39:52.845861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.615 [2024-04-17 08:39:52.845904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.615 [2024-04-17 08:39:52.845913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:19.615 [2024-04-17 08:39:52.849325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.615 [2024-04-17 08:39:52.849360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.615 
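[editor's note] For readers tracing these failures: NVMe/TCP guards the payload of each data-bearing PDU with a CRC32C data digest (the DDGST field). When the digest the receiver recomputes over the payload disagrees with the one carried on the wire, SPDK's TCP transport logs the "data digest error" seen here and the command completes with status 00/22, which spdk_nvme_print_completion renders as COMMAND TRANSIENT TRANSPORT ERROR. The sketch below is a minimal, self-contained illustration of that check, not SPDK's implementation; the 512-byte payload and the single-bit corruption are assumptions for the demo.

/* Illustrative sketch of NVMe/TCP data digest (CRC32C) verification.
 * Not SPDK code; payload size and injected corruption are made up. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++) {
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t payload[512];
    memset(payload, 0xA5, sizeof(payload));

    /* Digest the sender appends to the PDU (the DDGST field). */
    uint32_t sent_digest = crc32c(payload, sizeof(payload));

    /* Simulate corruption of one payload byte in flight. */
    payload[100] ^= 0x01;

    /* Receiver-side verification of the data digest. */
    uint32_t recv_digest = crc32c(payload, sizeof(payload));
    if (recv_digest != sent_digest) {
        /* 00/22: status code type 0x0, status code 0x22,
         * printed by SPDK as COMMAND TRANSIENT TRANSPORT ERROR. */
        printf("data digest error: completing command with (00/22)\n");
    }
    return 0;
}

Because CRC32C detects any single-bit flip, a deliberately corrupted payload or digest will always trip this check, which is consistent with every READ on this queue pair failing the same way in the run above.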
[... pattern continues uninterrupted: "data digest error on tqpair=(0x1263a30)" followed by COMMAND TRANSIENT TRANSPORT ERROR (00/22) for each further READ on qid:1, entries from 08:39:52.849369 through 08:39:53.106411 ...]
00:43:19.878 [2024-04-17 08:39:53.110680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30)
00:43:19.878 [2024-04-17 08:39:53.110714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:19.878 [2024-04-17 08:39:53.110723] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:19.878 [2024-04-17 08:39:53.114707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.878 [2024-04-17 08:39:53.114742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.878 [2024-04-17 08:39:53.114750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:19.878 [2024-04-17 08:39:53.118733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.878 [2024-04-17 08:39:53.118767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.878 [2024-04-17 08:39:53.118796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:19.878 [2024-04-17 08:39:53.122956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.878 [2024-04-17 08:39:53.122992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.878 [2024-04-17 08:39:53.123001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:19.878 [2024-04-17 08:39:53.126829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.878 [2024-04-17 08:39:53.126863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.878 [2024-04-17 08:39:53.126872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:19.878 [2024-04-17 08:39:53.130597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.878 [2024-04-17 08:39:53.130629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.878 [2024-04-17 08:39:53.130638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:19.878 [2024-04-17 08:39:53.134740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.878 [2024-04-17 08:39:53.134773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.878 [2024-04-17 08:39:53.134782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:19.878 [2024-04-17 08:39:53.138621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.878 [2024-04-17 08:39:53.138654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.878 
[2024-04-17 08:39:53.138662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:19.878 [2024-04-17 08:39:53.142508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.878 [2024-04-17 08:39:53.142540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.878 [2024-04-17 08:39:53.142548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:19.878 [2024-04-17 08:39:53.146454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.878 [2024-04-17 08:39:53.146485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.878 [2024-04-17 08:39:53.146494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:19.878 [2024-04-17 08:39:53.150482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.878 [2024-04-17 08:39:53.150513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.878 [2024-04-17 08:39:53.150522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:19.878 [2024-04-17 08:39:53.154306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.878 [2024-04-17 08:39:53.154339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.879 [2024-04-17 08:39:53.154348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:19.879 [2024-04-17 08:39:53.158446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.879 [2024-04-17 08:39:53.158474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.879 [2024-04-17 08:39:53.158483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:19.879 [2024-04-17 08:39:53.162465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.879 [2024-04-17 08:39:53.162498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.879 [2024-04-17 08:39:53.162507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:19.879 [2024-04-17 08:39:53.166309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.879 [2024-04-17 08:39:53.166342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:864 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.879 [2024-04-17 08:39:53.166351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:19.879 [2024-04-17 08:39:53.169891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.879 [2024-04-17 08:39:53.169922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.879 [2024-04-17 08:39:53.169931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:19.879 [2024-04-17 08:39:53.173655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.879 [2024-04-17 08:39:53.173686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.879 [2024-04-17 08:39:53.173695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:19.879 [2024-04-17 08:39:53.177544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.879 [2024-04-17 08:39:53.177573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.879 [2024-04-17 08:39:53.177581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:19.879 [2024-04-17 08:39:53.181642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.879 [2024-04-17 08:39:53.181676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.879 [2024-04-17 08:39:53.181685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:19.879 [2024-04-17 08:39:53.185899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.879 [2024-04-17 08:39:53.185932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.879 [2024-04-17 08:39:53.185940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:19.879 [2024-04-17 08:39:53.189965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.879 [2024-04-17 08:39:53.189999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.879 [2024-04-17 08:39:53.190008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:19.879 [2024-04-17 08:39:53.193716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.879 [2024-04-17 08:39:53.193748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.879 [2024-04-17 08:39:53.193757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:19.879 [2024-04-17 08:39:53.197671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.879 [2024-04-17 08:39:53.197703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.879 [2024-04-17 08:39:53.197711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:19.879 [2024-04-17 08:39:53.201447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.879 [2024-04-17 08:39:53.201477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.879 [2024-04-17 08:39:53.201485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:19.879 [2024-04-17 08:39:53.205276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:19.879 [2024-04-17 08:39:53.205310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:19.879 [2024-04-17 08:39:53.205318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.139 [2024-04-17 08:39:53.209059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.209090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.209098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.213077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.213111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.213120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.217242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.217277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.217287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.221303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.221345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.221356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.225533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.225566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.225575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.229802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.229836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.229844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.233863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.233895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.233903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.237880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.237913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.237922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.241813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.241845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.241853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.245940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.245980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.245988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.249765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 
[2024-04-17 08:39:53.249796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.249805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.253724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.253756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.253765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.257657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.257688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.257696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.261622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.261653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.261662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.265725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.265754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.265761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.269638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.269669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.269677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.273602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.273633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.273641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.277538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.277569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.277578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.281577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.281608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.281616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.285498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.285532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.285541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.289065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.289097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.289105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.293083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.293119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.293128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.296952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.296988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.296997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.300907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.300941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.300950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.304741] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.304774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.304783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.308584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.308618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.140 [2024-04-17 08:39:53.308626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.140 [2024-04-17 08:39:53.312591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.140 [2024-04-17 08:39:53.312618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.312627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.316606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.316638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.316647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.320357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.320390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.320409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.324228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.324260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.324268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.328050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.328084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.328092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:43:20.141 [2024-04-17 08:39:53.332042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.332075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.332083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.335912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.335942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.335949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.339717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.339745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.339753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.343428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.343459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.343467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.347072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.347102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.347111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.350371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.350410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.350419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.354115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.354147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.354155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.358212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.358244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.358253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.362203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.362235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.362244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.366019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.366049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.366058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.370110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.370141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.370150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.373896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.373926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.373933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.377627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.377655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.377662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.381977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.382024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.382033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.385531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.385559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.385567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.389349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.389380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.389388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.393211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.393242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.393250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.397142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.397175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.397183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.401097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.401130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.401138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.405089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.405121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.405130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.409003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.409036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.409044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.413210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.413248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.141 [2024-04-17 08:39:53.413258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.141 [2024-04-17 08:39:53.417110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.141 [2024-04-17 08:39:53.417146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.142 [2024-04-17 08:39:53.417154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.142 [2024-04-17 08:39:53.421178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.142 [2024-04-17 08:39:53.421224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.142 [2024-04-17 08:39:53.421233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.142 [2024-04-17 08:39:53.424689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.142 [2024-04-17 08:39:53.424729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.142 [2024-04-17 08:39:53.424738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.142 [2024-04-17 08:39:53.428091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.142 [2024-04-17 08:39:53.428126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.142 [2024-04-17 08:39:53.428133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.142 [2024-04-17 08:39:53.431904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.142 [2024-04-17 08:39:53.431940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.142 [2024-04-17 08:39:53.431948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.142 [2024-04-17 08:39:53.435131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.142 [2024-04-17 08:39:53.435164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.142 
[2024-04-17 08:39:53.435172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.142 [2024-04-17 08:39:53.439306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.142 [2024-04-17 08:39:53.439343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.142 [2024-04-17 08:39:53.439350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.142 [2024-04-17 08:39:53.442434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.142 [2024-04-17 08:39:53.442467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.142 [2024-04-17 08:39:53.442475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.142 [2024-04-17 08:39:53.446460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.142 [2024-04-17 08:39:53.446494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.142 [2024-04-17 08:39:53.446504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.142 [2024-04-17 08:39:53.449796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.142 [2024-04-17 08:39:53.449826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.142 [2024-04-17 08:39:53.449834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.142 [2024-04-17 08:39:53.453625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.142 [2024-04-17 08:39:53.453656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.142 [2024-04-17 08:39:53.453663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.142 [2024-04-17 08:39:53.457560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.142 [2024-04-17 08:39:53.457589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.142 [2024-04-17 08:39:53.457596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.142 [2024-04-17 08:39:53.461303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.142 [2024-04-17 08:39:53.461335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.142 [2024-04-17 08:39:53.461342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.142 [2024-04-17 08:39:53.464497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.142 [2024-04-17 08:39:53.464531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.142 [2024-04-17 08:39:53.464539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.142 [2024-04-17 08:39:53.467878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.142 [2024-04-17 08:39:53.467913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.142 [2024-04-17 08:39:53.467921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.471513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.404 [2024-04-17 08:39:53.471548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.404 [2024-04-17 08:39:53.471557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.475352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.404 [2024-04-17 08:39:53.475390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.404 [2024-04-17 08:39:53.475425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.478398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.404 [2024-04-17 08:39:53.478442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.404 [2024-04-17 08:39:53.478451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.482213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.404 [2024-04-17 08:39:53.482246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.404 [2024-04-17 08:39:53.482254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.485971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.404 [2024-04-17 08:39:53.486002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.404 [2024-04-17 08:39:53.486010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.490203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.404 [2024-04-17 08:39:53.490241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.404 [2024-04-17 08:39:53.490250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.493338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.404 [2024-04-17 08:39:53.493373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.404 [2024-04-17 08:39:53.493382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.496870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.404 [2024-04-17 08:39:53.496907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.404 [2024-04-17 08:39:53.496915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.500397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.404 [2024-04-17 08:39:53.500448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.404 [2024-04-17 08:39:53.500457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.504535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.404 [2024-04-17 08:39:53.504575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.404 [2024-04-17 08:39:53.504584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.508036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.404 [2024-04-17 08:39:53.508077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.404 [2024-04-17 08:39:53.508086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.512180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.404 [2024-04-17 08:39:53.512225] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.404 [2024-04-17 08:39:53.512235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.515735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.404 [2024-04-17 08:39:53.515773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.404 [2024-04-17 08:39:53.515781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.519185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.404 [2024-04-17 08:39:53.519224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.404 [2024-04-17 08:39:53.519233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.522131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.404 [2024-04-17 08:39:53.522168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.404 [2024-04-17 08:39:53.522177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.525683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.404 [2024-04-17 08:39:53.525721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.404 [2024-04-17 08:39:53.525730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.529671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.404 [2024-04-17 08:39:53.529707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.404 [2024-04-17 08:39:53.529715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.532781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.404 [2024-04-17 08:39:53.532814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.404 [2024-04-17 08:39:53.532838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.535584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 
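
The `data digest error` entries above come from the NVMe/TCP receive path: for each C2H data PDU the initiator recomputes the CRC32C data digest (DDGST) over the received payload and compares it against the digest carried in the PDU, and the function name in the log, nvme_tcp_accel_seq_recv_compute_crc32_done, shows the CRC here is computed through SPDK's accel framework. As a rough, self-contained illustration of the check only (not SPDK's actual implementation; the buffer contents and names below are made up):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78) --
 * the CRC family NVMe/TCP uses for its header and data digests. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    const uint8_t payload[] = "example C2H payload";       /* hypothetical data */
    uint32_t ddgst = crc32c(payload, sizeof(payload) - 1); /* digest as sent   */

    uint8_t received[sizeof(payload)];
    memcpy(received, payload, sizeof(received));
    received[0] ^= 0x01;                                   /* one flipped bit  */

    if (crc32c(received, sizeof(received) - 1) != ddgst)
        printf("data digest error\n"); /* the condition the *ERROR* lines report */
    return 0;
}

A mismatch does not point at the media; it means the bytes that crossed the TCP connection are not the bytes the digest was computed over, which is why the affected commands are completed with a transport-level status rather than a media error.
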
00:43:20.404 [2024-04-17 08:39:53.535617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.404 [2024-04-17 08:39:53.535626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.539047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.404 [2024-04-17 08:39:53.539084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.404 [2024-04-17 08:39:53.539092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.404 [2024-04-17 08:39:53.542198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.542235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.542244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.545320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.545354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.545363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.549646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.549688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.549698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.553593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.553632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.553642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.556877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.556915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.556924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.560244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.560283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.560292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.564042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.564079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.564088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.567653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.567692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.567701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.570722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.570756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.570765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.575053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.575097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.575107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.578437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.578474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.578482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.582477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.582521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.582531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.586233] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.586274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.586283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.590369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.590423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.590434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.594003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.594040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.594049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.597823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.597863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.597873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.602188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.602231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.602241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.605758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.605797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.605806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.609744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.609781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.609789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
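
Each `spdk_nvme_print_completion` entry above is a decoded completion queue entry: `cdw0` is command-specific dword 0, `sqhd` is the submission queue head reported by the controller, and `(00/22)` is status code type 0x0 (generic command status) together with status code 0x22, which the driver names COMMAND TRANSIENT TRANSPORT ERROR; `p`, `m`, and `dnr` are the phase, more, and do-not-retry bits. A minimal sketch of that decode, assuming the status-field layout from the NVMe base specification (SPDK's own `struct spdk_nvme_status` uses equivalent bitfields; the struct and function names below are made up):

#include <stdint.h>
#include <stdio.h>

/* The 16-bit status field from completion queue entry dword 3
 * (bits 31:16): P, SC, SCT, CRD, M, DNR, per the NVMe base spec. */
struct nvme_status {
    unsigned p   : 1;  /* phase tag                    */
    unsigned sc  : 8;  /* status code                  */
    unsigned sct : 3;  /* status code type             */
    unsigned crd : 2;  /* command retry delay          */
    unsigned m   : 1;  /* more status info in log page */
    unsigned dnr : 1;  /* do not retry                 */
};

static struct nvme_status decode_status(uint16_t raw)
{
    struct nvme_status st;
    st.p   = raw & 1;
    st.sc  = (raw >> 1) & 0xFF;
    st.sct = (raw >> 9) & 0x7;
    st.crd = (raw >> 12) & 0x3;
    st.m   = (raw >> 14) & 1;
    st.dnr = (raw >> 15) & 1;
    return st;
}

int main(void)
{
    /* Hypothetical raw value matching the prints above:
     * SCT 0x0, SC 0x22, p:0 m:0 dnr:0 -> "(00/22)". */
    uint16_t raw = (uint16_t)((0x0u << 9) | (0x22u << 1));
    struct nvme_status st = decode_status(raw);
    printf("sct=%02x sc=%02x p=%u m=%u dnr=%u\n",
           st.sct, st.sc, st.p, st.m, st.dnr);
    return 0;
}
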
00:43:20.405 [2024-04-17 08:39:53.613621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.613660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.613669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.616826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.616863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.616871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.620051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.620092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.620101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.623688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.623729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.623738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.627485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.627524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.627533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.631157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.631198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.631207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.634210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.634247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.634257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.638004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.405 [2024-04-17 08:39:53.638044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.405 [2024-04-17 08:39:53.638053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.405 [2024-04-17 08:39:53.642028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.642070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.642079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.645254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.645290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.645299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.648741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.648779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.648787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.652130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.652166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.652174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.655314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.655350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.655370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.658490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.658525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.658534] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.661600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.661645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.661653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.664919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.664953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.664961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.668817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.668852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.668861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.672818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.672854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.672863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.676899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.676938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.676947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.680938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.680973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.680982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.685089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.685128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 
08:39:53.685137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.688730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.688769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.688778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.692370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.692424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.692433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.695764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.695803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.695811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.699445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.699487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.699496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.703083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.703122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.703132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.706683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.706721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.706729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.710263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.710299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.710308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.713943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.714002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.714011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.717785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.717822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.717830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.720871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.720904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.720913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.724313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.724347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.724355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.728164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.728204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.728229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.406 [2024-04-17 08:39:53.731376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.406 [2024-04-17 08:39:53.731429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.406 [2024-04-17 08:39:53.731437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.734853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.734897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.734906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.738149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.738188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.738196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.741284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.741319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.741329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.744564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.744601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.744610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.747406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.747455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.747464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.750765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.750804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.750812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.754120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.754159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.754168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.757192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.757229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.757238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.761445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.761509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.761518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.764617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.764657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.764665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.768504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.768542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.768550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.772225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.772271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.772279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.775640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.775696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.775704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.778634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.778675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.778684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.781531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 
[2024-04-17 08:39:53.781567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.781575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.784938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.784972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.784980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.788561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.788591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.788599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.792076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.792108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.792116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.796454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.796502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.796512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.800203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.800252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.800261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.803901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.803942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.803951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.807272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.807308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.807316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.810449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.810482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.810491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.814201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.668 [2024-04-17 08:39:53.814237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.668 [2024-04-17 08:39:53.814246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.668 [2024-04-17 08:39:53.817779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.669 [2024-04-17 08:39:53.817811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.669 [2024-04-17 08:39:53.817819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.669 [2024-04-17 08:39:53.822059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.669 [2024-04-17 08:39:53.822097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.669 [2024-04-17 08:39:53.822105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.669 [2024-04-17 08:39:53.825826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.669 [2024-04-17 08:39:53.825862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.669 [2024-04-17 08:39:53.825871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.669 [2024-04-17 08:39:53.829703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.669 [2024-04-17 08:39:53.829739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.669 [2024-04-17 08:39:53.829748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.669 [2024-04-17 08:39:53.833011] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.669 [2024-04-17 08:39:53.833048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.669 [2024-04-17 08:39:53.833057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.669 [2024-04-17 08:39:53.836561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.669 [2024-04-17 08:39:53.836596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.669 [2024-04-17 08:39:53.836605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.669 [2024-04-17 08:39:53.839956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.669 [2024-04-17 08:39:53.839993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.669 [2024-04-17 08:39:53.840002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.669 [2024-04-17 08:39:53.843725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.669 [2024-04-17 08:39:53.843763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.669 [2024-04-17 08:39:53.843772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.669 [2024-04-17 08:39:53.847049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.669 [2024-04-17 08:39:53.847085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.669 [2024-04-17 08:39:53.847094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.669 [2024-04-17 08:39:53.850542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.669 [2024-04-17 08:39:53.850576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.669 [2024-04-17 08:39:53.850585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.669 [2024-04-17 08:39:53.854900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.669 [2024-04-17 08:39:53.854938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.669 [2024-04-17 08:39:53.854947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
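
Note the pattern common to every completion in this run: `dnr:0`, i.e. the do-not-retry bit is clear, and SCT 0x0 / SC 0x22 is the transient transport error status, so the controller is reporting that the command failed in transit but may legitimately be resubmitted. A hedged sketch of what a host-side classification could look like (illustrative policy only, not SPDK's actual bdev/nvme retry logic; the macro and function names are made up):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NVME_SCT_GENERIC                0x0
#define NVME_SC_TRANSIENT_TRANSPORT_ERR 0x22  /* printed as "(00/22)" above */

/* Returns true when a failed command is safe to resubmit. */
static bool completion_is_retryable(uint8_t sct, uint8_t sc, bool dnr)
{
    if (dnr)  /* controller explicitly forbade a retry */
        return false;
    return sct == NVME_SCT_GENERIC && sc == NVME_SC_TRANSIENT_TRANSPORT_ERR;
}

int main(void)
{
    /* The values that appear throughout this log: (00/22) with dnr:0. */
    printf("retryable = %d\n", completion_is_retryable(0x0, 0x22, false));
    return 0;
}
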
00:43:20.669 [2024-04-17 08:39:53.858433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.669 [2024-04-17 08:39:53.858474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.669 [2024-04-17 08:39:53.858483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.669 [2024-04-17 08:39:53.861981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.669 [2024-04-17 08:39:53.862018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.669 [2024-04-17 08:39:53.862027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.669 [2024-04-17 08:39:53.865665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.669 [2024-04-17 08:39:53.865700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.669 [2024-04-17 08:39:53.865709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:20.669 [2024-04-17 08:39:53.869358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.669 [2024-04-17 08:39:53.869408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.669 [2024-04-17 08:39:53.869418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:20.669 [2024-04-17 08:39:53.873030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.669 [2024-04-17 08:39:53.873069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.669 [2024-04-17 08:39:53.873078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:20.669 [2024-04-17 08:39:53.877009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.669 [2024-04-17 08:39:53.877047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.669 [2024-04-17 08:39:53.877057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:20.669 [2024-04-17 08:39:53.880215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:20.669 [2024-04-17 08:39:53.880252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:20.669 [2024-04-17 08:39:53.880261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:43:20.669 [2024-04-17 08:39:53.883840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30)
00:43:20.669 [2024-04-17 08:39:53.883879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:20.669 [2024-04-17 08:39:53.883889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:43:20.669 [2024-04-17 08:39:53.887498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30)
00:43:20.669 [2024-04-17 08:39:53.887537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:20.669 [2024-04-17 08:39:53.887547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line pattern (nvme_tcp.c:1391 data digest error on tqpair=(0x1263a30), READ command notice, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats on qid:1 with varying cid/lba values through 2024-04-17 08:39:54.393 ...]
00:43:21.195 [2024-04-17 08:39:54.396909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.195 [2024-04-17 08:39:54.396941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.195 [2024-04-17 08:39:54.396948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:21.195 [2024-04-17 08:39:54.400573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.195 [2024-04-17 08:39:54.400603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.195 [2024-04-17 08:39:54.400612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:21.195 [2024-04-17 08:39:54.404197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.195 [2024-04-17 08:39:54.404229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.195 [2024-04-17 08:39:54.404236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:21.195 [2024-04-17 08:39:54.407979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.195 [2024-04-17 08:39:54.408009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.195 [2024-04-17 08:39:54.408017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:21.195 [2024-04-17 08:39:54.411483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.195 [2024-04-17 08:39:54.411517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.195 [2024-04-17 08:39:54.411526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:21.195 [2024-04-17 08:39:54.415162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.195 [2024-04-17 08:39:54.415195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.195 [2024-04-17 08:39:54.415203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:21.195 [2024-04-17 08:39:54.418961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.195 [2024-04-17 08:39:54.418994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.195 [2024-04-17 08:39:54.419003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:21.195 [2024-04-17 08:39:54.422430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.195 [2024-04-17 08:39:54.422459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.195 [2024-04-17 08:39:54.422466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:21.195 [2024-04-17 08:39:54.426333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.195 [2024-04-17 08:39:54.426367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.195 [2024-04-17 08:39:54.426375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:21.195 [2024-04-17 08:39:54.429535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.195 [2024-04-17 08:39:54.429567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.195 [2024-04-17 08:39:54.429575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:21.195 [2024-04-17 08:39:54.433417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.195 [2024-04-17 08:39:54.433446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.195 [2024-04-17 08:39:54.433453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:21.195 [2024-04-17 08:39:54.437341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.195 [2024-04-17 08:39:54.437378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.195 [2024-04-17 08:39:54.437387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:21.195 [2024-04-17 08:39:54.441517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.195 [2024-04-17 08:39:54.441551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.195 [2024-04-17 08:39:54.441559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:21.196 [2024-04-17 08:39:54.445050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.196 [2024-04-17 08:39:54.445084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.196 [2024-04-17 08:39:54.445093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:21.196 [2024-04-17 08:39:54.448613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.196 [2024-04-17 08:39:54.448646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.196 [2024-04-17 08:39:54.448653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:21.196 [2024-04-17 08:39:54.451968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.196 [2024-04-17 08:39:54.451999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.196 [2024-04-17 08:39:54.452006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:21.196 [2024-04-17 08:39:54.455257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.196 [2024-04-17 08:39:54.455285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.196 [2024-04-17 08:39:54.455292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:21.196 [2024-04-17 08:39:54.459010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.196 [2024-04-17 08:39:54.459044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.196 [2024-04-17 08:39:54.459052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:21.196 [2024-04-17 08:39:54.462550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.196 [2024-04-17 08:39:54.462580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.196 [2024-04-17 08:39:54.462589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:21.196 [2024-04-17 08:39:54.466244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.196 [2024-04-17 08:39:54.466274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.196 [2024-04-17 08:39:54.466281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:21.196 [2024-04-17 08:39:54.469783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30) 00:43:21.196 [2024-04-17 08:39:54.469816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:43:21.196 [2024-04-17 08:39:54.473740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1263a30)
00:43:21.196 [2024-04-17 08:39:54.473773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:21.196 [2024-04-17 08:39:54.473781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:43:21.196
00:43:21.196 Latency(us)
00:43:21.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:21.196 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:43:21.196 nvme0n1 : 2.00 8259.58 1032.45 0.00 0.00 1933.92 550.90 7898.66
00:43:21.196 ===================================================================================================================
00:43:21.196 Total : 8259.58 1032.45 0.00 0.00 1933.92 550.90 7898.66
00:43:21.196 0
00:43:21.196 08:39:54 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:43:21.196 08:39:54 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:43:21.196 | .driver_specific
00:43:21.196 | .nvme_error
00:43:21.196 | .status_code
00:43:21.196 | .command_transient_transport_error'
00:43:21.196 08:39:54 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:43:21.196 08:39:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:43:21.455 08:39:54 -- host/digest.sh@71 -- # (( 533 > 0 ))
00:43:21.455 08:39:54 -- host/digest.sh@73 -- # killprocess 85130
00:43:21.455 08:39:54 -- common/autotest_common.sh@926 -- # '[' -z 85130 ']'
00:43:21.455 08:39:54 -- common/autotest_common.sh@930 -- # kill -0 85130
00:43:21.455 08:39:54 -- common/autotest_common.sh@931 -- # uname
00:43:21.455 08:39:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:43:21.455 08:39:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85130
00:43:21.455 killing process with pid 85130
Received shutdown signal, test time was about 2.000000 seconds
00:43:21.455
00:43:21.455 Latency(us)
00:43:21.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:21.455 ===================================================================================================================
00:43:21.455 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:43:21.455 08:39:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:43:21.455 08:39:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:43:21.455 08:39:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85130'
00:43:21.455 08:39:54 -- common/autotest_common.sh@945 -- # kill 85130
00:43:21.455 08:39:54 -- common/autotest_common.sh@950 -- # wait 85130
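The get_transient_errcount trace above is how host/digest.sh decides the randread phase passed: it reads the per-bdev NVMe error counters over the bdevperf RPC socket and requires a non-zero count of transient transport errors (533 here). A minimal standalone sketch of the same check follows, assuming rpc.py from the SPDK tree, jq on PATH, and a bdevperf instance started with bdev_nvme_set_options --nvme-error-stat listening on /var/tmp/bperf.sock; the script wrapper is illustrative, not the literal digest.sh source.

#!/usr/bin/env bash
# Sketch of the transient-error check traced above: pull bdev I/O statistics
# over the bdevperf RPC socket and extract the count of completions that
# ended in COMMAND TRANSIENT TRANSPORT ERROR (status 00/22).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
bdev=nvme0n1
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The digest test only passes when the injected CRC corruption actually
# produced transport-level errors, i.e. (( errcount > 0 )).
(( errcount > 0 )) || { echo "no transient transport errors seen" >&2; exit 1; }
echo "OK: $errcount transient transport errors"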
00:43:21.715 08:39:54 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:43:21.715 08:39:54 -- host/digest.sh@54 -- # local rw bs qd
00:43:21.715 08:39:54 -- host/digest.sh@56 -- # rw=randwrite
00:43:21.715 08:39:54 -- host/digest.sh@56 -- # bs=4096
00:43:21.715 08:39:54 -- host/digest.sh@56 -- # qd=128
00:43:21.715 08:39:54 -- host/digest.sh@58 -- # bperfpid=85210
00:43:21.715 08:39:54 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:43:21.715 08:39:54 -- host/digest.sh@60 -- # waitforlisten 85210 /var/tmp/bperf.sock
00:43:21.715 08:39:54 -- common/autotest_common.sh@819 -- # '[' -z 85210 ']'
00:43:21.715 08:39:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:43:21.715 08:39:54 -- common/autotest_common.sh@824 -- # local max_retries=100
00:43:21.715 08:39:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:43:21.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:43:21.715 08:39:54 -- common/autotest_common.sh@828 -- # xtrace_disable
00:43:21.715 08:39:54 -- common/autotest_common.sh@10 -- # set +x
00:43:21.715 [2024-04-17 08:39:55.030826] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:43:21.715 [2024-04-17 08:39:55.030888] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85210 ]
00:43:21.974 [2024-04-17 08:39:55.171024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:21.974 [2024-04-17 08:39:55.264170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:43:22.911 08:39:55 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:43:22.911 08:39:55 -- common/autotest_common.sh@852 -- # return 0
00:43:22.911 08:39:55 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:43:22.911 08:39:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:43:22.911 08:39:56 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:43:22.911 08:39:56 -- common/autotest_common.sh@551 -- # xtrace_disable
00:43:22.911 08:39:56 -- common/autotest_common.sh@10 -- # set +x
00:43:22.911 08:39:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:43:22.911 08:39:56 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:43:22.911 08:39:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:43:23.169 nvme0n1
00:43:23.169 08:39:56 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:43:23.169 08:39:56 -- common/autotest_common.sh@551 -- # xtrace_disable
00:43:23.169 08:39:56 -- common/autotest_common.sh@10 -- # set +x
00:43:23.169 08:39:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:43:23.169 08:39:56 -- host/digest.sh@69 -- # bperf_py perform_tests
00:43:23.169 08:39:56 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:43:23.427 Running I/O for 2 seconds...
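Reassembled from the xtrace output above, the write-phase setup amounts to the RPC sequence sketched below. The bdevperf socket path and every RPC name and flag are taken verbatim from the trace; the target-side socket path /var/tmp/spdk.sock is an assumption, since the rpc_cmd wrapper in the trace does not print which socket it talks to.

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock   # bdevperf (initiator side), as traced via bperf_rpc
tgt_sock=/var/tmp/spdk.sock      # nvmf target app; assumed default socket for rpc_cmd

# Keep NVMe error statistics and retry failed commands indefinitely, so each
# digest failure surfaces as a counted, retried TRANSIENT TRANSPORT ERROR
# instead of a failed I/O.
"$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Start with crc32c error injection disabled while the controller attaches.
"$rpc" -s "$tgt_sock" accel_error_inject_error -o crc32c -t disable
# Attach the TCP controller with data digest (--ddgst) enabled.
"$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt the next 256 crc32c operations so data digest verification fails.
"$rpc" -s "$tgt_sock" accel_error_inject_error -o crc32c -t corrupt -i 256
# Run the queued bdevperf job: randwrite, 4096-byte I/O, queue depth 128, 2 s.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests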
00:43:23.427 [2024-04-17 08:39:56.513764] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190eea00
00:43:23.427 [2024-04-17 08:39:56.514574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:43:23.427 [2024-04-17 08:39:56.514601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:43:23.427 [2024-04-17 08:39:56.522743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e6300
00:43:23.427 [2024-04-17 08:39:56.523537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:43:23.427 [2024-04-17 08:39:56.523566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
[... the same three-line pattern (Data digest error from data_crc32_calc_done on tqpair 0x14b9a10 with a varying pdu, WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for dozens more WRITE commands between 08:39:56.532 and 08:39:57.418; only pdu, lba, cid, and sqhd differ ...]
00:43:24.204 [2024-04-17 08:39:57.427784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with
pdu=0x2000190e95a0 00:43:24.204 [2024-04-17 08:39:57.428529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.204 [2024-04-17 08:39:57.428569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:43:24.204 [2024-04-17 08:39:57.437768] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e3060 00:43:24.204 [2024-04-17 08:39:57.439311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.204 [2024-04-17 08:39:57.439350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:43:24.204 [2024-04-17 08:39:57.448703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e1f80 00:43:24.204 [2024-04-17 08:39:57.450362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.204 [2024-04-17 08:39:57.450420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:24.204 [2024-04-17 08:39:57.458051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ef6a8 00:43:24.204 [2024-04-17 08:39:57.459167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.204 [2024-04-17 08:39:57.459206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:43:24.204 [2024-04-17 08:39:57.468710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ed0b0 00:43:24.204 [2024-04-17 08:39:57.469703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.204 [2024-04-17 08:39:57.469746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:24.204 [2024-04-17 08:39:57.478813] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ddc00 00:43:24.204 [2024-04-17 08:39:57.479754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.204 [2024-04-17 08:39:57.479791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:24.204 [2024-04-17 08:39:57.487762] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190df118 00:43:24.204 [2024-04-17 08:39:57.488572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.204 [2024-04-17 08:39:57.488604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:43:24.204 [2024-04-17 08:39:57.497501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14b9a10) with pdu=0x2000190f8a50 00:43:24.204 [2024-04-17 08:39:57.498163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.204 [2024-04-17 08:39:57.498207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:43:24.204 [2024-04-17 08:39:57.507635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e84c0 00:43:24.204 [2024-04-17 08:39:57.508596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.204 [2024-04-17 08:39:57.508632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:43:24.204 [2024-04-17 08:39:57.517798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e6738 00:43:24.204 [2024-04-17 08:39:57.518669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.205 [2024-04-17 08:39:57.518701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:24.205 [2024-04-17 08:39:57.528014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190eea00 00:43:24.205 [2024-04-17 08:39:57.528924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.205 [2024-04-17 08:39:57.528960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:24.463 [2024-04-17 08:39:57.539915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ec408 00:43:24.463 [2024-04-17 08:39:57.540653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.463 [2024-04-17 08:39:57.540688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:24.463 [2024-04-17 08:39:57.549451] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190fa7d8 00:43:24.463 [2024-04-17 08:39:57.550112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.463 [2024-04-17 08:39:57.550143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:43:24.463 [2024-04-17 08:39:57.559053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f9b30 00:43:24.463 [2024-04-17 08:39:57.559763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.463 [2024-04-17 08:39:57.559797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:43:24.463 [2024-04-17 08:39:57.568895] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190eb760 00:43:24.463 [2024-04-17 08:39:57.569510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.463 [2024-04-17 08:39:57.569551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:43:24.463 [2024-04-17 08:39:57.578795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ea248 00:43:24.463 [2024-04-17 08:39:57.579457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.463 [2024-04-17 08:39:57.579491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:43:24.463 [2024-04-17 08:39:57.589035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e2c28 00:43:24.463 [2024-04-17 08:39:57.589743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.463 [2024-04-17 08:39:57.589786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:43:24.463 [2024-04-17 08:39:57.599545] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f3e60 00:43:24.463 [2024-04-17 08:39:57.600312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.463 [2024-04-17 08:39:57.600347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:43:24.463 [2024-04-17 08:39:57.610485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e49b0 00:43:24.463 [2024-04-17 08:39:57.611089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.463 [2024-04-17 08:39:57.611117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:43:24.463 [2024-04-17 08:39:57.619898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e23b8 00:43:24.463 [2024-04-17 08:39:57.621335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.463 [2024-04-17 08:39:57.621372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:43:24.463 [2024-04-17 08:39:57.630915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f5be8 00:43:24.463 [2024-04-17 08:39:57.631842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.463 [2024-04-17 08:39:57.631873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:43:24.463 [2024-04-17 08:39:57.640520] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f5be8 00:43:24.463 [2024-04-17 08:39:57.641630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.463 [2024-04-17 08:39:57.641662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:43:24.463 [2024-04-17 08:39:57.650509] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f3e60 00:43:24.463 [2024-04-17 08:39:57.651588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.463 [2024-04-17 08:39:57.651618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:43:24.463 [2024-04-17 08:39:57.660222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f7da8 00:43:24.463 [2024-04-17 08:39:57.661230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.463 [2024-04-17 08:39:57.661261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:43:24.463 [2024-04-17 08:39:57.670044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f4f40 00:43:24.463 [2024-04-17 08:39:57.671087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.463 [2024-04-17 08:39:57.671119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:43:24.463 [2024-04-17 08:39:57.680163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f31b8 00:43:24.463 [2024-04-17 08:39:57.681187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.463 [2024-04-17 08:39:57.681218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:43:24.463 [2024-04-17 08:39:57.690377] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190eb328 00:43:24.463 [2024-04-17 08:39:57.691421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.463 [2024-04-17 08:39:57.691450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:43:24.463 [2024-04-17 08:39:57.700277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ebfd0 00:43:24.463 [2024-04-17 08:39:57.701272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.463 [2024-04-17 08:39:57.701303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:24.463 
[2024-04-17 08:39:57.709968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e5658 00:43:24.463 [2024-04-17 08:39:57.710920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.463 [2024-04-17 08:39:57.710952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:24.463 [2024-04-17 08:39:57.720325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f9b30 00:43:24.463 [2024-04-17 08:39:57.721087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.463 [2024-04-17 08:39:57.721121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:43:24.463 [2024-04-17 08:39:57.730558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e3060 00:43:24.464 [2024-04-17 08:39:57.731258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.464 [2024-04-17 08:39:57.731290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:43:24.464 [2024-04-17 08:39:57.742644] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ed0b0 00:43:24.464 [2024-04-17 08:39:57.743336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.464 [2024-04-17 08:39:57.743365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:43:24.464 [2024-04-17 08:39:57.751401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f3e60 00:43:24.464 [2024-04-17 08:39:57.752130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.464 [2024-04-17 08:39:57.752162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:43:24.464 [2024-04-17 08:39:57.761386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f8a50 00:43:24.464 [2024-04-17 08:39:57.762171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.464 [2024-04-17 08:39:57.762206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:43:24.464 [2024-04-17 08:39:57.772741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ea680 00:43:24.464 [2024-04-17 08:39:57.773474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.464 [2024-04-17 08:39:57.773505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004a p:0 
m:0 dnr:0 00:43:24.464 [2024-04-17 08:39:57.782514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f3a28 00:43:24.464 [2024-04-17 08:39:57.783643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.464 [2024-04-17 08:39:57.783673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:43:24.464 [2024-04-17 08:39:57.792399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e0630 00:43:24.464 [2024-04-17 08:39:57.793485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.464 [2024-04-17 08:39:57.793516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:43:24.723 [2024-04-17 08:39:57.802643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190eea00 00:43:24.723 [2024-04-17 08:39:57.802937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.723 [2024-04-17 08:39:57.802955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:43:24.723 [2024-04-17 08:39:57.812476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190fa7d8 00:43:24.723 [2024-04-17 08:39:57.812975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.723 [2024-04-17 08:39:57.813005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:43:24.723 [2024-04-17 08:39:57.821652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190fc998 00:43:24.723 [2024-04-17 08:39:57.822164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.723 [2024-04-17 08:39:57.822189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:43:24.723 [2024-04-17 08:39:57.831481] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e8088 00:43:24.723 [2024-04-17 08:39:57.832612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.723 [2024-04-17 08:39:57.832644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:43:24.723 [2024-04-17 08:39:57.841504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190fc998 00:43:24.723 [2024-04-17 08:39:57.842754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.723 [2024-04-17 08:39:57.842787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:43:24.723 [2024-04-17 08:39:57.851241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190fc998 00:43:24.723 [2024-04-17 08:39:57.852508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.723 [2024-04-17 08:39:57.852538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:43:24.723 [2024-04-17 08:39:57.862221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e8088 00:43:24.723 [2024-04-17 08:39:57.863964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.723 [2024-04-17 08:39:57.863996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:43:24.723 [2024-04-17 08:39:57.872666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f0ff8 00:43:24.723 [2024-04-17 08:39:57.874162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.723 [2024-04-17 08:39:57.874196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:43:24.723 [2024-04-17 08:39:57.882051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190eb328 00:43:24.723 [2024-04-17 08:39:57.883355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.723 [2024-04-17 08:39:57.883388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:43:24.723 [2024-04-17 08:39:57.892474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ef270 00:43:24.723 [2024-04-17 08:39:57.893082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.723 [2024-04-17 08:39:57.893115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:43:24.723 [2024-04-17 08:39:57.902914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f1868 00:43:24.723 [2024-04-17 08:39:57.903487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.723 [2024-04-17 08:39:57.903516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:43:24.723 [2024-04-17 08:39:57.912837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ec840 00:43:24.723 [2024-04-17 08:39:57.913757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.723 [2024-04-17 08:39:57.913791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:43:24.723 [2024-04-17 08:39:57.924146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ec840 00:43:24.723 [2024-04-17 08:39:57.925060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.723 [2024-04-17 08:39:57.925102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:43:24.723 [2024-04-17 08:39:57.933459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f92c0 00:43:24.723 [2024-04-17 08:39:57.934803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.723 [2024-04-17 08:39:57.934838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:43:24.723 [2024-04-17 08:39:57.943801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ee5c8 00:43:24.723 [2024-04-17 08:39:57.944461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.723 [2024-04-17 08:39:57.944491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:24.723 [2024-04-17 08:39:57.953382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f20d8 00:43:24.723 [2024-04-17 08:39:57.954524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.723 [2024-04-17 08:39:57.954560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:43:24.723 [2024-04-17 08:39:57.963682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f0350 00:43:24.723 [2024-04-17 08:39:57.964058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.723 [2024-04-17 08:39:57.964079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:24.723 [2024-04-17 08:39:57.976996] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190fa3a0 00:43:24.723 [2024-04-17 08:39:57.978054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.723 [2024-04-17 08:39:57.978097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:43:24.723 [2024-04-17 08:39:57.984899] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190fc998 00:43:24.723 [2024-04-17 08:39:57.985049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.723 [2024-04-17 08:39:57.985080] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:43:24.723 [2024-04-17 08:39:57.996634] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e6738 00:43:24.723 [2024-04-17 08:39:57.996940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.724 [2024-04-17 08:39:57.996971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:43:24.724 [2024-04-17 08:39:58.007062] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f6020 00:43:24.724 [2024-04-17 08:39:58.007345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.724 [2024-04-17 08:39:58.007372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:43:24.724 [2024-04-17 08:39:58.017088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190fc998 00:43:24.724 [2024-04-17 08:39:58.018433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.724 [2024-04-17 08:39:58.018475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:43:24.724 [2024-04-17 08:39:58.029245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e99d8 00:43:24.724 [2024-04-17 08:39:58.030045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.724 [2024-04-17 08:39:58.030085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:43:24.724 [2024-04-17 08:39:58.039811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ecc78 00:43:24.724 [2024-04-17 08:39:58.041462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.724 [2024-04-17 08:39:58.041500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:43:24.724 [2024-04-17 08:39:58.050214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190fc998 00:43:24.724 [2024-04-17 08:39:58.051063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.724 [2024-04-17 08:39:58.051102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.060462] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ebfd0 00:43:24.986 [2024-04-17 08:39:58.061967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.062005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.071422] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e01f8 00:43:24.986 [2024-04-17 08:39:58.072294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.072330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.081823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e6738 00:43:24.986 [2024-04-17 08:39:58.083321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.083355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.092311] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e3498 00:43:24.986 [2024-04-17 08:39:58.093295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.093327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.101803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e88f8 00:43:24.986 [2024-04-17 08:39:58.102797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.102828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.111796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ea248 00:43:24.986 [2024-04-17 08:39:58.112460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.112487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.121717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ef6a8 00:43:24.986 [2024-04-17 08:39:58.122354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.122385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.131643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f5be8 00:43:24.986 [2024-04-17 08:39:58.132277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 
08:39:58.132310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.141415] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e3498 00:43:24.986 [2024-04-17 08:39:58.142030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.142061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.151264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190fe2e8 00:43:24.986 [2024-04-17 08:39:58.151950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.151983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.161008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e1b48 00:43:24.986 [2024-04-17 08:39:58.161651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.161684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.170425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190fc998 00:43:24.986 [2024-04-17 08:39:58.171284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.171315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.180023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f0ff8 00:43:24.986 [2024-04-17 08:39:58.181297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.181326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.189076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ea680 00:43:24.986 [2024-04-17 08:39:58.190553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.190584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.199688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f6890 00:43:24.986 [2024-04-17 08:39:58.200260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:43:24.986 [2024-04-17 08:39:58.200292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.208712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e73e0 00:43:24.986 [2024-04-17 08:39:58.209348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.209387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.217428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e84c0 00:43:24.986 [2024-04-17 08:39:58.218451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.218477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.227662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ebfd0 00:43:24.986 [2024-04-17 08:39:58.228599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.228628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.234671] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f4b08 00:43:24.986 [2024-04-17 08:39:58.234749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.234767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.246064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f35f0 00:43:24.986 [2024-04-17 08:39:58.246748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.246778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.255683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f2510 00:43:24.986 [2024-04-17 08:39:58.256403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.256449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.264325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f31b8 00:43:24.986 [2024-04-17 08:39:58.265282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3552 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.265312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.272995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190fe2e8 00:43:24.986 [2024-04-17 08:39:58.273801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.273830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.283804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ddc00 00:43:24.986 [2024-04-17 08:39:58.285407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.986 [2024-04-17 08:39:58.285436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:43:24.986 [2024-04-17 08:39:58.292379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f2948 00:43:24.986 [2024-04-17 08:39:58.293517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.987 [2024-04-17 08:39:58.293547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:43:24.987 [2024-04-17 08:39:58.302094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e5220 00:43:24.987 [2024-04-17 08:39:58.302564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.987 [2024-04-17 08:39:58.302584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:24.987 [2024-04-17 08:39:58.311695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e73e0 00:43:24.987 [2024-04-17 08:39:58.312168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:24.987 [2024-04-17 08:39:58.312195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:25.249 [2024-04-17 08:39:58.320198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e9e10 00:43:25.249 [2024-04-17 08:39:58.320255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:25.250 [2024-04-17 08:39:58.320271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:43:25.250 [2024-04-17 08:39:58.331108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f1868 00:43:25.250 [2024-04-17 08:39:58.332305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 
lba:1871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:25.250 [2024-04-17 08:39:58.332336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:43:25.250 [2024-04-17 08:39:58.340304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ec840 00:43:25.250 [2024-04-17 08:39:58.340852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:25.250 [2024-04-17 08:39:58.340883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:43:25.250 [2024-04-17 08:39:58.349422] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e1f80 00:43:25.250 [2024-04-17 08:39:58.349880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:25.250 [2024-04-17 08:39:58.349901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:43:25.250 [2024-04-17 08:39:58.358501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190eb328 00:43:25.250 [2024-04-17 08:39:58.358992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:25.250 [2024-04-17 08:39:58.359022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:25.250 [2024-04-17 08:39:58.367031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e6fa8 00:43:25.250 [2024-04-17 08:39:58.367785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:25.250 [2024-04-17 08:39:58.367815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:25.250 [2024-04-17 08:39:58.376762] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190de038 00:43:25.250 [2024-04-17 08:39:58.377070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:25.250 [2024-04-17 08:39:58.377086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:25.250 [2024-04-17 08:39:58.388297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ec840 00:43:25.250 [2024-04-17 08:39:58.389271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:25.250 [2024-04-17 08:39:58.389303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:43:25.250 [2024-04-17 08:39:58.396524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190fdeb0 00:43:25.250 [2024-04-17 08:39:58.397591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:115 nsid:1 lba:22517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:25.250 [2024-04-17 08:39:58.397622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:43:25.250 [2024-04-17 08:39:58.406184] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f2948 00:43:25.250 [2024-04-17 08:39:58.406770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:25.250 [2024-04-17 08:39:58.406804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:43:25.250 [2024-04-17 08:39:58.416644] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f8a50 00:43:25.250 [2024-04-17 08:39:58.417282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:25.250 [2024-04-17 08:39:58.417314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:43:25.250 [2024-04-17 08:39:58.426107] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e7c50 00:43:25.250 [2024-04-17 08:39:58.427734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:25.250 [2024-04-17 08:39:58.427765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:43:25.250 [2024-04-17 08:39:58.434747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ec408 00:43:25.250 [2024-04-17 08:39:58.435832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:25.250 [2024-04-17 08:39:58.435863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:43:25.250 [2024-04-17 08:39:58.444609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e84c0 00:43:25.250 [2024-04-17 08:39:58.444985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:25.250 [2024-04-17 08:39:58.445006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:25.250 [2024-04-17 08:39:58.454922] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190e7818 00:43:25.250 [2024-04-17 08:39:58.455510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:25.250 [2024-04-17 08:39:58.455561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:43:25.250 [2024-04-17 08:39:58.465416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ea680 00:43:25.250 [2024-04-17 08:39:58.467072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:43:25.250 [2024-04-17 08:39:58.467108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:43:25.250 [2024-04-17 08:39:58.475280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190f35f0
00:43:25.250 [2024-04-17 08:39:58.476746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:43:25.250 [2024-04-17 08:39:58.476778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:43:25.250 [2024-04-17 08:39:58.484240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190ff3c8
00:43:25.250 [2024-04-17 08:39:58.485229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:43:25.250 [2024-04-17 08:39:58.485319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:43:25.250 [2024-04-17 08:39:58.494145] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9a10) with pdu=0x2000190eff18
00:43:25.250 [2024-04-17 08:39:58.494842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:43:25.250 [2024-04-17 08:39:58.494926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:43:25.250
00:43:25.250 Latency(us)
00:43:25.250 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:43:25.250 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:43:25.250 nvme0n1            :       2.00   25270.92      98.71       0.00       0.00    5060.33    1824.42   13965.75
00:43:25.250 ===================================================================================================================
00:43:25.250 Total              :           25270.92      98.71       0.00       0.00    5060.33    1824.42   13965.75
00:43:25.250 0
00:43:25.250 08:39:58 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:43:25.250 08:39:58 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:43:25.250 08:39:58 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:43:25.250 | .driver_specific
00:43:25.250 | .nvme_error
00:43:25.250 | .status_code
00:43:25.250 | .command_transient_transport_error'
00:43:25.250 08:39:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:43:25.509 08:39:58 -- host/digest.sh@71 -- # (( 198 > 0 ))
00:43:25.509 08:39:58 -- host/digest.sh@73 -- # killprocess 85210
00:43:25.509 08:39:58 -- common/autotest_common.sh@926 -- # '[' -z 85210 ']'
00:43:25.509 08:39:58 -- common/autotest_common.sh@930 -- # kill -0 85210
00:43:25.509 08:39:58 -- common/autotest_common.sh@931 -- # uname
00:43:25.509 08:39:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:43:25.509 08:39:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85210
00:43:25.509 killing process with pid 85210
00:43:25.509 Received shutdown signal, test time was about 2.000000 seconds
00:43:25.509
00:43:25.509 Latency(us)
00:43:25.509 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:43:25.509 ===================================================================================================================
00:43:25.509 Total              :               0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:43:25.509 08:39:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:43:25.509 08:39:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:43:25.509 08:39:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85210'
00:43:25.509 08:39:58 -- common/autotest_common.sh@945 -- # kill 85210
00:43:25.509 08:39:58 -- common/autotest_common.sh@950 -- # wait 85210
00:43:25.768 08:39:59 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:43:25.768 08:39:59 -- host/digest.sh@54 -- # local rw bs qd
00:43:25.768 08:39:59 -- host/digest.sh@56 -- # rw=randwrite
00:43:25.768 08:39:59 -- host/digest.sh@56 -- # bs=131072
00:43:25.768 08:39:59 -- host/digest.sh@56 -- # qd=16
00:43:25.768 08:39:59 -- host/digest.sh@58 -- # bperfpid=85294
00:43:25.768 08:39:59 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:43:25.768 08:39:59 -- host/digest.sh@60 -- # waitforlisten 85294 /var/tmp/bperf.sock
00:43:25.768 08:39:59 -- common/autotest_common.sh@819 -- # '[' -z 85294 ']'
00:43:25.768 08:39:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:43:25.768 08:39:59 -- common/autotest_common.sh@824 -- # local max_retries=100
00:43:25.768 08:39:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:43:25.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:43:25.768 08:39:59 -- common/autotest_common.sh@828 -- # xtrace_disable
00:43:25.768 08:39:59 -- common/autotest_common.sh@10 -- # set +x
00:43:25.768 [2024-04-17 08:39:59.063565] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:43:25.768 [2024-04-17 08:39:59.063768] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85294 ]
00:43:25.768 I/O size of 131072 is greater than zero copy threshold (65536).
00:43:25.768 Zero copy mechanism will not be used.
00:43:26.026 [2024-04-17 08:39:59.210180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:26.026 [2024-04-17 08:39:59.315739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:43:26.956 08:39:59 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:43:26.956 08:39:59 -- common/autotest_common.sh@852 -- # return 0
00:43:26.956 08:39:59 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:43:26.956 08:39:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:43:26.956 08:40:00 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:43:26.956 08:40:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:43:26.956 08:40:00 -- common/autotest_common.sh@10 -- # set +x
00:43:26.956 08:40:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:43:26.956 08:40:00 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:43:26.956 08:40:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:43:27.214 nvme0n1
00:43:27.214 08:40:00 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:43:27.214 08:40:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:43:27.214 08:40:00 -- common/autotest_common.sh@10 -- # set +x
00:43:27.214 08:40:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:43:27.214 08:40:00 -- host/digest.sh@69 -- # bperf_py perform_tests
00:43:27.214 08:40:00 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:43:27.474 I/O size of 131072 is greater than zero copy threshold (65536).
00:43:27.474 Zero copy mechanism will not be used.
00:43:27.474 Running I/O for 2 seconds...
00:43:27.474 [2024-04-17 08:40:00.642823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.474 [2024-04-17 08:40:00.643385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.474 [2024-04-17 08:40:00.643499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.474 [2024-04-17 08:40:00.647008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.474 [2024-04-17 08:40:00.647320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.474 [2024-04-17 08:40:00.647431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.474 [2024-04-17 08:40:00.650435] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.474 [2024-04-17 08:40:00.650790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.474 [2024-04-17 08:40:00.650822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:27.474 [2024-04-17 08:40:00.653901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.474 [2024-04-17 08:40:00.654083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.474 [2024-04-17 08:40:00.654110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:27.474 [2024-04-17 08:40:00.657306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.474 [2024-04-17 08:40:00.657425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.474 [2024-04-17 08:40:00.657447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.474 [2024-04-17 08:40:00.660674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.474 [2024-04-17 08:40:00.660769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.474 [2024-04-17 08:40:00.660791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.474 [2024-04-17 08:40:00.664193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.474 [2024-04-17 08:40:00.664473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.474 [2024-04-17 08:40:00.664508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:27.474 [2024-04-17 08:40:00.667645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.474 [2024-04-17 08:40:00.667766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.474 [2024-04-17 08:40:00.667785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:27.474 [2024-04-17 08:40:00.671164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.474 [2024-04-17 08:40:00.671378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.474 [2024-04-17 08:40:00.671416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.474 [2024-04-17 08:40:00.674594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.474 [2024-04-17 08:40:00.674843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.474 [2024-04-17 08:40:00.674877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.474 [2024-04-17 08:40:00.678030] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.474 [2024-04-17 08:40:00.678140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.474 [2024-04-17 08:40:00.678158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:27.474 [2024-04-17 08:40:00.681620] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.474 [2024-04-17 08:40:00.681883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.474 [2024-04-17 08:40:00.681918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:27.474 [2024-04-17 08:40:00.684923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.685100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.685126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.688464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.688648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.688672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.691877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.692068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.692092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.695181] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.695278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.695297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.698750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.698956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.698983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.702132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.702371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.702413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.705567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.705720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.705745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.709001] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.709333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.709368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.712495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.712638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.712664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.716025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.716215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.716239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.719385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.719616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.719650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.722637] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.722777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.722801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.726116] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.726367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.726410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.729436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.729570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.729597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.732931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.733094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.733114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.736501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.736760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 
[2024-04-17 08:40:00.736811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.740038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.740172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.740192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.743572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.743770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.743796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.747036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.747197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.747233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.750541] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.750688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.750715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.754055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.754247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.754282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.757482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.757617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.757640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.760999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.761201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.761225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.764527] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.764670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.764695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.767977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.768069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.768088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.771511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.771703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.771727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.775000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.475 [2024-04-17 08:40:00.775150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.475 [2024-04-17 08:40:00.775176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:27.475 [2024-04-17 08:40:00.778489] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.476 [2024-04-17 08:40:00.778627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.476 [2024-04-17 08:40:00.778653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:27.476 [2024-04-17 08:40:00.782054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.476 [2024-04-17 08:40:00.782238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.476 [2024-04-17 08:40:00.782265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.476 [2024-04-17 08:40:00.785422] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.476 [2024-04-17 08:40:00.785584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.476 [2024-04-17 08:40:00.785608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.476 [2024-04-17 08:40:00.788971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.476 [2024-04-17 08:40:00.789158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.476 [2024-04-17 08:40:00.789191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:27.476 [2024-04-17 08:40:00.792458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.476 [2024-04-17 08:40:00.792647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.476 [2024-04-17 08:40:00.792675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:27.476 [2024-04-17 08:40:00.795875] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.476 [2024-04-17 08:40:00.795993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.476 [2024-04-17 08:40:00.796018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.476 [2024-04-17 08:40:00.799406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.476 [2024-04-17 08:40:00.799635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.476 [2024-04-17 08:40:00.799660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.476 [2024-04-17 08:40:00.802887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.476 [2024-04-17 08:40:00.803004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.476 [2024-04-17 08:40:00.803024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:27.737 [2024-04-17 08:40:00.806424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.737 [2024-04-17 08:40:00.806639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.737 [2024-04-17 08:40:00.806690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:27.737 [2024-04-17 08:40:00.809908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.737 [2024-04-17 08:40:00.810104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.737 [2024-04-17 08:40:00.810139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.737 [2024-04-17 08:40:00.813348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.737 [2024-04-17 08:40:00.813509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.737 [2024-04-17 08:40:00.813537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.737 [2024-04-17 08:40:00.816910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.737 [2024-04-17 08:40:00.817120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.737 [2024-04-17 08:40:00.817144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:27.737 [2024-04-17 08:40:00.820265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.737 [2024-04-17 08:40:00.820405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.737 [2024-04-17 08:40:00.820428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:27.737 [2024-04-17 08:40:00.823751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.737 [2024-04-17 08:40:00.823951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.737 [2024-04-17 08:40:00.823974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.737 [2024-04-17 08:40:00.827206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.737 [2024-04-17 08:40:00.827357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.737 [2024-04-17 08:40:00.827383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.737 [2024-04-17 08:40:00.830551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.737 [2024-04-17 08:40:00.830698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.737 [2024-04-17 08:40:00.830717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:27.737 [2024-04-17 08:40:00.833847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.737 
[2024-04-17 08:40:00.834064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.737 [2024-04-17 08:40:00.834088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:27.737 [2024-04-17 08:40:00.837079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.837236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.837263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.840320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.840439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.840458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.843537] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.843707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.843730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.846744] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.846884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.846908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.850149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.850365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.850402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.853575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.853742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.853758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.856931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.857085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.857111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.860187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.860352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.860375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.863327] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.863559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.863576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.866452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.866645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.866667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.869524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.869691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.869708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.872628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.872757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.872774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.875763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.875918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.875941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.878977] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.879079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.879114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.882566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.882769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.882787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.885926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.886117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.886135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.889190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.889303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.889321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.892581] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.892764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.892789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.896031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.896130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.896148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.899247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.899465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.899483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
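(Each injected error appears in this trace as the same three-record group: tcp.c:2034:data_crc32_calc_done reports the digest mismatch on the qpair, nvme_qpair.c prints the WRITE command the corrupted PDU belonged to, and the completion is logged as COMMAND TRANSIENT TRANSPORT ERROR (00/22). Since every detected mismatch produces one such completion, the two tallies below should track each other when run over a saved copy of this console output; build.log is just a placeholder name.)

    grep -c 'data_crc32_calc_done' build.log             # digest mismatches detected by the initiator
    grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' build.log # completions counted by --nvme-error-stat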
00:43:27.738 [2024-04-17 08:40:00.902609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.902775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.902794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.905898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.906054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.906072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.909351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.909568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.909587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.912707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.912816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.912835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.916146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.916337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.916361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.919750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.919904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.919924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:27.738 [2024-04-17 08:40:00.923132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:27.738 [2024-04-17 08:40:00.923289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:27.738 [2024-04-17 08:40:00.923309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:43:27.738 [2024-04-17 08:40:00.926728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90
00:43:27.738 [2024-04-17 08:40:00.926923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:27.739 [2024-04-17 08:40:00.926943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:43:27.739 [2024-04-17 08:40:00.930218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90
00:43:27.739 [2024-04-17 08:40:00.930426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:27.739 [2024-04-17 08:40:00.930446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern -- tcp.c:2034 Data digest error on tqpair=(0x14b9d50), the offending WRITE print (sqid:1, len:32; lba and cid vary), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion (sqhd cycling 0001/0021/0041/0061, dnr:0) -- repeats for every remaining WRITE from 08:40:00.933 through 08:40:01.398 ...]
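What this test run is exercising: the digest that tcp.c's data_crc32_calc_done reports as bad is the NVMe/TCP data digest (DDGST), a CRC32C (Castagnoli) computed over the DATA field of a data PDU and carried in the 4 bytes that trail the payload. The receiver recomputes the CRC over the bytes it actually got and compares; because a mismatch indicates transport corruption rather than a media error, each affected WRITE above completes with status 00/22 (COMMAND TRANSIENT TRANSPORT ERROR) and dnr:0, i.e. retryable. Below is a minimal, self-contained sketch of that check at the spec level; it is illustrative only, not SPDK's implementation, and the names crc32c/verify_ddgst are hypothetical:

/* Illustrative sketch only -- not SPDK's code. Shows the DDGST rule the
 * failing check above enforces: recompute CRC32C over the PDU data and
 * compare with the digest received on the wire. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Returns 0 when the recomputed digest matches the received DDGST;
 * -1 on the mismatch path logged above as "Data digest error". */
static int verify_ddgst(const uint8_t *data, size_t len, uint32_t recv_ddgst)
{
    return crc32c(data, len) == recv_ddgst ? 0 : -1;
}

int main(void)
{
    uint8_t payload[4096];
    memset(payload, 0xA5, sizeof(payload));      /* stand-in PDU data */

    uint32_t ddgst = crc32c(payload, sizeof(payload));
    printf("intact:    %d\n", verify_ddgst(payload, sizeof(payload), ddgst));
    payload[0] ^= 0xFF;                          /* corrupt one byte */
    printf("corrupted: %d\n", verify_ddgst(payload, sizeof(payload), ddgst));
    return 0;
}

Compiled standalone (e.g. cc -o ddgst ddgst.c), the corrupted case takes the same mismatch branch that produces the error/retryable-completion pairs in this log.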
08:40:01.377396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:28.269 [2024-04-17 08:40:01.380661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.269 [2024-04-17 08:40:01.380849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.269 [2024-04-17 08:40:01.380869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:28.269 [2024-04-17 08:40:01.384060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.269 [2024-04-17 08:40:01.384211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.269 [2024-04-17 08:40:01.384230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:28.269 [2024-04-17 08:40:01.387499] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.269 [2024-04-17 08:40:01.387643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.269 [2024-04-17 08:40:01.387663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:28.269 [2024-04-17 08:40:01.391062] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.269 [2024-04-17 08:40:01.391257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.269 [2024-04-17 08:40:01.391284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:28.269 [2024-04-17 08:40:01.394553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.269 [2024-04-17 08:40:01.394753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.269 [2024-04-17 08:40:01.394772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:28.269 [2024-04-17 08:40:01.397924] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.269 [2024-04-17 08:40:01.398155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.269 [2024-04-17 08:40:01.398174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:28.269 [2024-04-17 08:40:01.401320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.269 [2024-04-17 08:40:01.401507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:43:28.269 [2024-04-17 08:40:01.401526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:28.269 [2024-04-17 08:40:01.404683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.269 [2024-04-17 08:40:01.404773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.269 [2024-04-17 08:40:01.404791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:28.269 [2024-04-17 08:40:01.408136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.269 [2024-04-17 08:40:01.408323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.269 [2024-04-17 08:40:01.408341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:28.269 [2024-04-17 08:40:01.411632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.269 [2024-04-17 08:40:01.411803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.269 [2024-04-17 08:40:01.411822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:28.269 [2024-04-17 08:40:01.415127] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.269 [2024-04-17 08:40:01.415276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.415294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.418694] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.418871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.418889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.422035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.422176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.422195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.425536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.425754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.425773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.428973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.429118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.429136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.432397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.432594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.432613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.435908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.436126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.436143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.439366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.439561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.439585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.442792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.442945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.442963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.446233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.446387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.446419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.449612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.449732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.449749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.453054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.453251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.453270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.456502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.456663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.456682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.459999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.460106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.460125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.463506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.463723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.463748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.466863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.467027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.467046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.470273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.470426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.470445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.473717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.473916] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.473936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.477144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.477303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.477330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.480698] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.480897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.480917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.484163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.484347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.484367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.487504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.487616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.487634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.490978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.491166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.491185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.494342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.494542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.494561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.497485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.497646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.497663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.500817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.501003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.501021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:28.270 [2024-04-17 08:40:01.504069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.270 [2024-04-17 08:40:01.504254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.270 [2024-04-17 08:40:01.504272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:28.271 [2024-04-17 08:40:01.507430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.271 [2024-04-17 08:40:01.507666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.271 [2024-04-17 08:40:01.507691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:28.271 [2024-04-17 08:40:01.510881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.271 [2024-04-17 08:40:01.511063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.271 [2024-04-17 08:40:01.511081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:28.271 [2024-04-17 08:40:01.514280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.271 [2024-04-17 08:40:01.514424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.271 [2024-04-17 08:40:01.514442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:28.271 [2024-04-17 08:40:01.517671] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.271 [2024-04-17 08:40:01.517865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.271 [2024-04-17 08:40:01.517883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:28.271 [2024-04-17 08:40:01.520998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.271 [2024-04-17 
08:40:01.521195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.271 [2024-04-17 08:40:01.521211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:28.271 [2024-04-17 08:40:01.524222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.271 [2024-04-17 08:40:01.524408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.271 [2024-04-17 08:40:01.524438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:28.271 [2024-04-17 08:40:01.527429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.271 [2024-04-17 08:40:01.527588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.271 [2024-04-17 08:40:01.527612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:28.271 [2024-04-17 08:40:01.530680] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.271 [2024-04-17 08:40:01.530851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.271 [2024-04-17 08:40:01.530868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:28.271 [2024-04-17 08:40:01.533721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.271 [2024-04-17 08:40:01.533872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.271 [2024-04-17 08:40:01.533891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:28.271 [2024-04-17 08:40:01.536853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.271 [2024-04-17 08:40:01.536985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.271 [2024-04-17 08:40:01.537001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:28.271 [2024-04-17 08:40:01.540208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.271 [2024-04-17 08:40:01.540465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.271 [2024-04-17 08:40:01.540485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:28.271 [2024-04-17 08:40:01.543740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 
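Every failure above is the digest-check path of the NVMe/TCP transport: the receiver computes a CRC32C over a PDU's data section, compares it with the DDGST field carried in the PDU, and on mismatch the command is completed with the retryable status printed in each triple (status code type 0x0, status code 0x22: COMMAND TRANSIENT TRANSPORT ERROR), so the initiator may resubmit. The following is a minimal stand-alone sketch of that checksum only (bitwise reflected CRC-32C; illustrative, not SPDK's optimized implementation; the file name, sample input, and check value are assumptions for demonstration):

/* crc32c_demo.c: bitwise (reflected) CRC-32C, the checksum used for the
 * NVMe/TCP header and data digests (polynomial 0x1EDC6F41, reflected form
 * 0x82F63B78). Build with: cc -o crc32c_demo crc32c_demo.c */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;                 /* CRC-32C initial value */
    while (len--) {
        crc ^= *buf++;
        for (int k = 0; k < 8; k++)             /* one bit at a time */
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;                   /* final XOR */
}

int main(void)
{
    const char *msg = "123456789";              /* standard CRC check input */
    uint32_t crc = crc32c((const uint8_t *)msg, strlen(msg));

    /* The published CRC-32C check value for "123456789" is 0xE3069283; a
     * receiver computing any other value over a PDU's data section is in
     * the same situation data_crc32_calc_done reports above. */
    printf("crc32c(\"%s\") = 0x%08X\n", msg, crc);
    return crc == 0xE3069283u ? 0 : 1;
}

Because the status is transient rather than a data error (dnr:0, "do not retry" clear), the test expects the host to keep retrying each corrupted WRITE, which is why the pattern repeats below for the full injection window.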
[... the cid:0 groups continue through 08:40:01.561, after which the identical pattern repeats with cid:15 in place of cid:0 through 08:40:01.812 ...]
00:43:28.535 [2024-04-17 08:40:01.815729] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90
00:43:28.535 [2024-04-17 08:40:01.815921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:28.535 [2024-04-17 08:40:01.815939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:43:28.535 [2024-04-17 08:40:01.818912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on
tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.535 [2024-04-17 08:40:01.819095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.535 [2024-04-17 08:40:01.819113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:28.535 [2024-04-17 08:40:01.822204] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.535 [2024-04-17 08:40:01.822391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.535 [2024-04-17 08:40:01.822420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:28.535 [2024-04-17 08:40:01.825333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.535 [2024-04-17 08:40:01.825466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.535 [2024-04-17 08:40:01.825482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:28.535 [2024-04-17 08:40:01.828393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.535 [2024-04-17 08:40:01.828540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.535 [2024-04-17 08:40:01.828557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:28.535 [2024-04-17 08:40:01.831676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.535 [2024-04-17 08:40:01.831863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.535 [2024-04-17 08:40:01.831880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:28.535 [2024-04-17 08:40:01.834816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.535 [2024-04-17 08:40:01.835038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.535 [2024-04-17 08:40:01.835054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:28.535 [2024-04-17 08:40:01.837909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.535 [2024-04-17 08:40:01.838127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.535 [2024-04-17 08:40:01.838143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:28.535 [2024-04-17 08:40:01.840906] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.535 [2024-04-17 08:40:01.841075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.535 [2024-04-17 08:40:01.841091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:28.535 [2024-04-17 08:40:01.844059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.535 [2024-04-17 08:40:01.844148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.535 [2024-04-17 08:40:01.844164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:28.535 [2024-04-17 08:40:01.847186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.535 [2024-04-17 08:40:01.847327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.535 [2024-04-17 08:40:01.847343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:28.535 [2024-04-17 08:40:01.850244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.535 [2024-04-17 08:40:01.850350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.535 [2024-04-17 08:40:01.850366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:28.535 [2024-04-17 08:40:01.853308] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.535 [2024-04-17 08:40:01.853511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.535 [2024-04-17 08:40:01.853527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:28.535 [2024-04-17 08:40:01.856322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.535 [2024-04-17 08:40:01.856537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.535 [2024-04-17 08:40:01.856554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:28.535 [2024-04-17 08:40:01.859322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:28.535 [2024-04-17 08:40:01.859467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:28.535 [2024-04-17 08:40:01.859487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
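What this stretch of output is exercising: with data digests (DDGST) negotiated on the NVMe/TCP connection, the sender appends a CRC32C over each data PDU's payload and the receiver recomputes it on arrival; a mismatch is reported by data_crc32_calc_done in tcp.c, and the affected command is completed with COMMAND TRANSIENT TRANSPORT ERROR (sct 00h / sc 22h, the "00/22" printed above). The sketch below is a minimal, self-contained illustration of that check, not SPDK's implementation; crc32c() and verify_data_digest() are illustrative names only, and the bitwise CRC loop stands in for the table-driven or hardware-accelerated CRC32C a real target would use.

/*
 * Hypothetical sketch (not SPDK source): what an NVMe/TCP data digest
 * check does conceptually. The sender appends a CRC32C of the PDU's
 * data section; the receiver recomputes it and, on a mismatch, fails
 * the command with a transient transport error so the host may retry.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78.
 * Chosen here for readability, not speed. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Returns 0 when the digest carried in the PDU matches the data. */
static int verify_data_digest(const uint8_t *data, size_t len,
                              uint32_t received_ddgst)
{
    return crc32c(data, len) == received_ddgst ? 0 : -1;
}

int main(void)
{
    uint8_t payload[512]; /* stand-in for one data PDU's data section */
    for (size_t i = 0; i < sizeof(payload); i++)
        payload[i] = (uint8_t)(i & 0xFF);

    /* Digest computed by the sender over the intact payload. */
    uint32_t ddgst = crc32c(payload, sizeof(payload));

    printf("intact payload:    %s\n",
           verify_data_digest(payload, sizeof(payload), ddgst) == 0
               ? "digest ok" : "Data digest error");

    payload[100] ^= 0x01; /* a single bit flipped in flight */
    printf("corrupted payload: %s\n",
           verify_data_digest(payload, sizeof(payload), ddgst) == 0
               ? "digest ok" : "Data digest error");
    return 0;
}

Because the resulting status is a transient transport error with the do-not-retry bit clear (dnr:0 in every completion above), the host is told the data was damaged in transit rather than on media and is free to resubmit, which is why the test simply continues issuing WRITEs below.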
00:43:28.798 [2024-04-17 08:40:01.862471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90
00:43:28.798 [2024-04-17 08:40:01.862631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:28.799 [2024-04-17 08:40:01.862649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the pattern continues unchanged from 08:40:01.865 through 08:40:02.164 (qid:1, cid alternating between 0 and 15, len:32, lba varying): each data PDU fails the digest check in tcp.c:2034:data_crc32_calc_done and each WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) dnr:0 ...]
00:43:29.063 [2024-04-17 08:40:02.167665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90
00:43:29.063 [2024-04-17 08:40:02.167800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:29.063 [2024-04-17 08:40:02.167827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021
p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.171016] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.171181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.171218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.174323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.174520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.174543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.177498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.177694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.177736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.180774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.180936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.180960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.183971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.184167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.184183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.187178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.187326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.187343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.190157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.190311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.190330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.193200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.193363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.193379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.196235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.196411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.196440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.199483] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.199664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.199680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.202584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.202744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.202761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.205692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.205825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.205842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.208867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.209063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.209080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.211972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.212129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.212144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.215233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.215442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.215459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.218304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.218509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.218526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.221426] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.221543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.221559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.224569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.224734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.224750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.227691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.227835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.227852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.230721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.230911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.230927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.233818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.234007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.234023] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.236842] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.237000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.237016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.239929] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.240128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.240145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.242801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.242992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.063 [2024-04-17 08:40:02.243015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.063 [2024-04-17 08:40:02.245754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.063 [2024-04-17 08:40:02.245876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.245891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.248855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.249028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.249045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.251840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.252042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.252058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.254790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.254969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.254987] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.257804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.258026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.258045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.260834] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.260980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.260997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.263922] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.264076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.264091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.266906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.267026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.267044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.269870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.270081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.270097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.272980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.273182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.273198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.275991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.276134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 
08:40:02.276150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.278909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.279072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.279095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.281839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.282044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.282060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.284885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.285072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.285087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.287893] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.288090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.288106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.290983] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.291073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.291092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.294044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.294242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.294260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.297071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.297214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:43:29.064 [2024-04-17 08:40:02.297230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.300072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.300219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.300235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.303135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.303298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.303314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.306155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.306301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.306318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.309191] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.309381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.309397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.312312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.312465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.312481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.315332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.315468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.315484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.318304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.318484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.318503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.321307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.321476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.321492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.324375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.324630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.064 [2024-04-17 08:40:02.324647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.064 [2024-04-17 08:40:02.327344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.064 [2024-04-17 08:40:02.327522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.327538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.065 [2024-04-17 08:40:02.330366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.065 [2024-04-17 08:40:02.330508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.330527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.065 [2024-04-17 08:40:02.333431] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.065 [2024-04-17 08:40:02.333632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.333648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.065 [2024-04-17 08:40:02.336446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.065 [2024-04-17 08:40:02.336634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.336650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.065 [2024-04-17 08:40:02.339494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.065 [2024-04-17 08:40:02.339661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.339678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.065 [2024-04-17 08:40:02.342488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.065 [2024-04-17 08:40:02.342707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.342724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.065 [2024-04-17 08:40:02.345435] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.065 [2024-04-17 08:40:02.345652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.345668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.065 [2024-04-17 08:40:02.348363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.065 [2024-04-17 08:40:02.348585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.348601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.065 [2024-04-17 08:40:02.351500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.065 [2024-04-17 08:40:02.351602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.351617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.065 [2024-04-17 08:40:02.354643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.065 [2024-04-17 08:40:02.354711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.354729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.065 [2024-04-17 08:40:02.357774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.065 [2024-04-17 08:40:02.357916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.357932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.065 [2024-04-17 08:40:02.360886] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.065 [2024-04-17 08:40:02.361043] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.361060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.065 [2024-04-17 08:40:02.363986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.065 [2024-04-17 08:40:02.364186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.364202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.065 [2024-04-17 08:40:02.367145] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.065 [2024-04-17 08:40:02.367260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.367279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.065 [2024-04-17 08:40:02.370272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.065 [2024-04-17 08:40:02.370484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.370507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.065 [2024-04-17 08:40:02.373344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.065 [2024-04-17 08:40:02.373504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.373520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.065 [2024-04-17 08:40:02.376412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.065 [2024-04-17 08:40:02.376533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.376548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.065 [2024-04-17 08:40:02.379583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.065 [2024-04-17 08:40:02.379767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.379784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.065 [2024-04-17 08:40:02.382864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.065 [2024-04-17 08:40:02.383039] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.383056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.065 [2024-04-17 08:40:02.386021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.065 [2024-04-17 08:40:02.386202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.386219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.065 [2024-04-17 08:40:02.389028] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.065 [2024-04-17 08:40:02.389211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.065 [2024-04-17 08:40:02.389228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.392155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.392278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.392294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.395437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.395598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.395615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.398558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.398696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.398714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.401707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.401894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.401911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.404948] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 
00:43:29.326 [2024-04-17 08:40:02.405095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.405113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.408067] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.408186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.408202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.411285] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.411466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.411484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.414326] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.414457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.414474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.417563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.417755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.417773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.420760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.420941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.420959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.423969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.424091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.424108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.427277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.427463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.427482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.430568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.430804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.430836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.434118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.434496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.434523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.437355] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.437735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.437759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.440894] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.441132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.441152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.444214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.444319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.444338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.447553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.447622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.447641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.450821] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.451053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.451080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.454096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.454241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.454259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.457261] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.457497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.457516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.460624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.460800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.460817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.463872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.464045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.464063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.467150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.326 [2024-04-17 08:40:02.467234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.326 [2024-04-17 08:40:02.467252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:29.326 [2024-04-17 08:40:02.470445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.327 [2024-04-17 08:40:02.470662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.327 [2024-04-17 08:40:02.470681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.327 
[2024-04-17 08:40:02.473496] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.327 [2024-04-17 08:40:02.473585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.327 [2024-04-17 08:40:02.473602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:29.327
[2024-04-17 08:40:02.476806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.327 [2024-04-17 08:40:02.476907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.327 [2024-04-17 08:40:02.476926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:29.327
[... further tcp.c data digest error / nvme_qpair WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) entries, identical apart from timestamp, cid, lba, and sqhd, omitted ...]
[2024-04-17 08:40:02.626092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14b9d50) with pdu=0x2000190fef90 00:43:29.328 [2024-04-17 08:40:02.626288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:29.328 [2024-04-17 08:40:02.626306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:29.328 00:43:29.328 Latency(us) 00:43:29.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:29.328 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:43:29.328 nvme0n1 : 2.00
9337.07 1167.13 0.00 0.00 1710.10 1252.05 10474.31 00:43:29.328 =================================================================================================================== 00:43:29.328 Total : 9337.07 1167.13 0.00 0.00 1710.10 1252.05 10474.31 00:43:29.328 0 00:43:29.587 08:40:02 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:43:29.587 08:40:02 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:43:29.587 08:40:02 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:43:29.587 | .driver_specific 00:43:29.587 | .nvme_error 00:43:29.587 | .status_code 00:43:29.587 | .command_transient_transport_error' 00:43:29.587 08:40:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:43:29.587 08:40:02 -- host/digest.sh@71 -- # (( 602 > 0 )) 00:43:29.587 08:40:02 -- host/digest.sh@73 -- # killprocess 85294 00:43:29.587 08:40:02 -- common/autotest_common.sh@926 -- # '[' -z 85294 ']' 00:43:29.587 08:40:02 -- common/autotest_common.sh@930 -- # kill -0 85294 00:43:29.587 08:40:02 -- common/autotest_common.sh@931 -- # uname 00:43:29.587 08:40:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:43:29.587 08:40:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85294 00:43:29.587 killing process with pid 85294 00:43:29.587 Received shutdown signal, test time was about 2.000000 seconds 00:43:29.587 00:43:29.587 Latency(us) 00:43:29.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:29.587 =================================================================================================================== 00:43:29.587 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:29.587 08:40:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:43:29.587 08:40:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:43:29.587 08:40:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85294' 00:43:29.587 08:40:02 -- common/autotest_common.sh@945 -- # kill 85294 00:43:29.587 08:40:02 -- common/autotest_common.sh@950 -- # wait 85294 00:43:29.846 08:40:03 -- host/digest.sh@115 -- # killprocess 84990 00:43:29.846 08:40:03 -- common/autotest_common.sh@926 -- # '[' -z 84990 ']' 00:43:29.846 08:40:03 -- common/autotest_common.sh@930 -- # kill -0 84990 00:43:29.846 08:40:03 -- common/autotest_common.sh@931 -- # uname 00:43:29.846 08:40:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:43:29.846 08:40:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84990 00:43:29.846 killing process with pid 84990 00:43:29.846 08:40:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:43:29.846 08:40:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:43:29.846 08:40:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84990' 00:43:29.846 08:40:03 -- common/autotest_common.sh@945 -- # kill 84990 00:43:29.846 08:40:03 -- common/autotest_common.sh@950 -- # wait 84990 00:43:30.412 00:43:30.412 real 0m17.834s 00:43:30.412 user 0m33.367s 00:43:30.412 sys 0m4.557s 00:43:30.412 08:40:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:30.412 ************************************ 00:43:30.412 END TEST nvmf_digest_error 00:43:30.412 ************************************ 00:43:30.412 08:40:03 -- common/autotest_common.sh@10 -- # set +x 00:43:30.412 08:40:03 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:43:30.412 08:40:03 -- host/digest.sh@139 -- # nvmftestfini 
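For reference, the get_transient_errcount check traced above reduces to a single RPC round trip against the bperf socket. A minimal standalone sketch, assuming the same socket path and bdev name (nvme0n1) as this run and that jq is available:

# Pull bdev I/O statistics over the bperf RPC socket and extract the NVMe
# transient transport error counter that the digest test asserts on.
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# Each injected data digest error surfaces as a TRANSIENT TRANSPORT ERROR
# completion, so the test passes when the counter is non-zero (602 here):
(( errcount > 0 )) || echo "no transient transport errors recorded" >&2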
00:43:30.412 08:40:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:43:30.412 08:40:03 -- nvmf/common.sh@116 -- # sync 00:43:30.412 08:40:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:43:30.412 08:40:03 -- nvmf/common.sh@119 -- # set +e 00:43:30.412 08:40:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:43:30.412 08:40:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:43:30.412 rmmod nvme_tcp 00:43:30.412 rmmod nvme_fabrics 00:43:30.412 rmmod nvme_keyring 00:43:30.412 08:40:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:43:30.412 08:40:03 -- nvmf/common.sh@123 -- # set -e 00:43:30.412 08:40:03 -- nvmf/common.sh@124 -- # return 0 00:43:30.412 08:40:03 -- nvmf/common.sh@477 -- # '[' -n 84990 ']' 00:43:30.412 08:40:03 -- nvmf/common.sh@478 -- # killprocess 84990 00:43:30.412 08:40:03 -- common/autotest_common.sh@926 -- # '[' -z 84990 ']' 00:43:30.412 08:40:03 -- common/autotest_common.sh@930 -- # kill -0 84990 00:43:30.412 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (84990) - No such process 00:43:30.412 Process with pid 84990 is not found 00:43:30.412 08:40:03 -- common/autotest_common.sh@953 -- # echo 'Process with pid 84990 is not found' 00:43:30.412 08:40:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:43:30.412 08:40:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:43:30.412 08:40:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:43:30.412 08:40:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:43:30.412 08:40:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:43:30.412 08:40:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:30.412 08:40:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:30.412 08:40:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:30.671 08:40:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:43:30.671 ************************************ 00:43:30.671 END TEST nvmf_digest 00:43:30.671 ************************************ 00:43:30.671 00:43:30.671 real 0m36.707s 00:43:30.672 user 1m8.016s 00:43:30.672 sys 0m9.378s 00:43:30.672 08:40:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:30.672 08:40:03 -- common/autotest_common.sh@10 -- # set +x 00:43:30.672 08:40:03 -- nvmf/nvmf.sh@109 -- # [[ 1 -eq 1 ]] 00:43:30.672 08:40:03 -- nvmf/nvmf.sh@109 -- # [[ tcp == \t\c\p ]] 00:43:30.672 08:40:03 -- nvmf/nvmf.sh@111 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:43:30.672 08:40:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:43:30.672 08:40:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:43:30.672 08:40:03 -- common/autotest_common.sh@10 -- # set +x 00:43:30.672 ************************************ 00:43:30.672 START TEST nvmf_mdns_discovery 00:43:30.672 ************************************ 00:43:30.672 08:40:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:43:30.672 * Looking for test storage... 
00:43:30.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:43:30.672 08:40:03 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:30.672 08:40:03 -- nvmf/common.sh@7 -- # uname -s 00:43:30.672 08:40:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:30.672 08:40:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:30.672 08:40:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:30.672 08:40:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:30.672 08:40:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:30.672 08:40:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:30.672 08:40:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:30.672 08:40:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:30.672 08:40:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:30.672 08:40:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:30.672 08:40:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:43:30.672 08:40:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:43:30.672 08:40:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:30.672 08:40:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:30.672 08:40:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:30.672 08:40:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:30.672 08:40:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:30.672 08:40:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:30.672 08:40:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:30.672 08:40:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:30.672 08:40:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:30.672 08:40:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:30.672 08:40:03 -- 
paths/export.sh@5 -- # export PATH 00:43:30.672 08:40:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:30.672 08:40:03 -- nvmf/common.sh@46 -- # : 0 00:43:30.672 08:40:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:43:30.672 08:40:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:43:30.672 08:40:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:43:30.672 08:40:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:30.672 08:40:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:30.672 08:40:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:43:30.672 08:40:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:43:30.672 08:40:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:43:30.672 08:40:03 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:43:30.672 08:40:03 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:43:30.672 08:40:03 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:43:30.672 08:40:03 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:43:30.672 08:40:03 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:43:30.672 08:40:03 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:43:30.672 08:40:03 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:43:30.672 08:40:03 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:43:30.672 08:40:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:43:30.672 08:40:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:30.672 08:40:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:43:30.672 08:40:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:43:30.672 08:40:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:43:30.672 08:40:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:30.672 08:40:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:30.672 08:40:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:30.672 08:40:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:43:30.672 08:40:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:43:30.672 08:40:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:43:30.672 08:40:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:43:30.672 08:40:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:43:30.672 08:40:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:43:30.672 08:40:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:30.672 08:40:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:30.672 08:40:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:43:30.672 08:40:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:43:30.672 08:40:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:30.672 08:40:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:30.672 08:40:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:30.672 08:40:03 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:30.672 08:40:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:30.672 08:40:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:43:30.672 08:40:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:30.672 08:40:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:30.672 08:40:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:43:30.672 08:40:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:43:30.672 Cannot find device "nvmf_tgt_br" 00:43:30.672 08:40:03 -- nvmf/common.sh@154 -- # true 00:43:30.672 08:40:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:43:30.931 Cannot find device "nvmf_tgt_br2" 00:43:30.931 08:40:04 -- nvmf/common.sh@155 -- # true 00:43:30.931 08:40:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:43:30.931 08:40:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:43:30.931 Cannot find device "nvmf_tgt_br" 00:43:30.931 08:40:04 -- nvmf/common.sh@157 -- # true 00:43:30.931 08:40:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:43:30.931 Cannot find device "nvmf_tgt_br2" 00:43:30.931 08:40:04 -- nvmf/common.sh@158 -- # true 00:43:30.931 08:40:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:43:30.931 08:40:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:43:30.931 08:40:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:30.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:30.931 08:40:04 -- nvmf/common.sh@161 -- # true 00:43:30.931 08:40:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:30.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:30.931 08:40:04 -- nvmf/common.sh@162 -- # true 00:43:30.931 08:40:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:43:30.931 08:40:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:43:30.931 08:40:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:43:30.931 08:40:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:43:30.931 08:40:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:43:30.931 08:40:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:43:30.931 08:40:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:43:30.931 08:40:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:43:30.931 08:40:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:43:30.931 08:40:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:43:30.931 08:40:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:43:30.931 08:40:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:43:30.931 08:40:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:43:30.931 08:40:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:43:30.931 08:40:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:43:30.931 08:40:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:43:30.931 08:40:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:43:30.931 08:40:04 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:43:30.931 08:40:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:43:30.931 08:40:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:43:30.931 08:40:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:43:30.931 08:40:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:43:30.931 08:40:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:30.931 08:40:04 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:43:30.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:30.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:43:30.931 00:43:30.931 --- 10.0.0.2 ping statistics --- 00:43:30.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:30.931 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:43:30.931 08:40:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:43:30.931 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:43:30.931 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:43:30.931 00:43:30.931 --- 10.0.0.3 ping statistics --- 00:43:30.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:30.931 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:43:30.932 08:40:04 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:43:31.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:31.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:43:31.190 00:43:31.190 --- 10.0.0.1 ping statistics --- 00:43:31.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:31.190 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:43:31.190 08:40:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:31.190 08:40:04 -- nvmf/common.sh@421 -- # return 0 00:43:31.190 08:40:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:43:31.190 08:40:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:31.190 08:40:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:43:31.190 08:40:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:43:31.190 08:40:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:31.190 08:40:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:43:31.190 08:40:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:43:31.190 08:40:04 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:43:31.190 08:40:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:43:31.190 08:40:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:43:31.190 08:40:04 -- common/autotest_common.sh@10 -- # set +x 00:43:31.190 08:40:04 -- nvmf/common.sh@469 -- # nvmfpid=85594 00:43:31.190 08:40:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:43:31.190 08:40:04 -- nvmf/common.sh@470 -- # waitforlisten 85594 00:43:31.190 08:40:04 -- common/autotest_common.sh@819 -- # '[' -z 85594 ']' 00:43:31.190 08:40:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:31.190 08:40:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:43:31.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:31.190 08:40:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
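The interface plumbing traced above is easier to follow as one consolidated script. The sketch below reproduces the topology nvmf/common.sh builds, with the same device and namespace names: both target interfaces live inside the nvmf_tgt_ns_spdk namespace, the initiator side stays on the host, and a bridge joins the two halves:

# Target side is isolated in a network namespace; veth pairs cross the boundary.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing: 10.0.0.1 is the initiator, 10.0.0.2/10.0.0.3 the two targets.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring everything up and bridge the host-side peers together.
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Allow NVMe/TCP traffic in and hairpin forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # sanity-check both target addresses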
00:43:31.190 08:40:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:43:31.190 08:40:04 -- common/autotest_common.sh@10 -- # set +x 00:43:31.190 [2024-04-17 08:40:04.364320] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:43:31.190 [2024-04-17 08:40:04.364401] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:31.190 [2024-04-17 08:40:04.505129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:31.447 [2024-04-17 08:40:04.603513] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:43:31.447 [2024-04-17 08:40:04.603635] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:31.447 [2024-04-17 08:40:04.603642] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:31.448 [2024-04-17 08:40:04.603647] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:31.448 [2024-04-17 08:40:04.603672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:43:32.013 08:40:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:43:32.013 08:40:05 -- common/autotest_common.sh@852 -- # return 0 00:43:32.013 08:40:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:43:32.013 08:40:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:43:32.013 08:40:05 -- common/autotest_common.sh@10 -- # set +x 00:43:32.013 08:40:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:32.013 08:40:05 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:43:32.013 08:40:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:32.013 08:40:05 -- common/autotest_common.sh@10 -- # set +x 00:43:32.013 08:40:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:32.013 08:40:05 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:43:32.013 08:40:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:32.013 08:40:05 -- common/autotest_common.sh@10 -- # set +x 00:43:32.271 08:40:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:32.271 08:40:05 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:32.271 08:40:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:32.271 08:40:05 -- common/autotest_common.sh@10 -- # set +x 00:43:32.271 [2024-04-17 08:40:05.376862] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:32.271 08:40:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:32.271 08:40:05 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:43:32.271 08:40:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:32.271 08:40:05 -- common/autotest_common.sh@10 -- # set +x 00:43:32.271 [2024-04-17 08:40:05.388964] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:43:32.271 08:40:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:32.271 08:40:05 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:43:32.271 08:40:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:32.271 08:40:05 -- 
common/autotest_common.sh@10 -- # set +x 00:43:32.271 null0 00:43:32.271 08:40:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:32.271 08:40:05 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:43:32.271 08:40:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:32.271 08:40:05 -- common/autotest_common.sh@10 -- # set +x 00:43:32.271 null1 00:43:32.271 08:40:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:32.271 08:40:05 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:43:32.271 08:40:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:32.271 08:40:05 -- common/autotest_common.sh@10 -- # set +x 00:43:32.271 null2 00:43:32.272 08:40:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:32.272 08:40:05 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:43:32.272 08:40:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:32.272 08:40:05 -- common/autotest_common.sh@10 -- # set +x 00:43:32.272 null3 00:43:32.272 08:40:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:32.272 08:40:05 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:43:32.272 08:40:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:32.272 08:40:05 -- common/autotest_common.sh@10 -- # set +x 00:43:32.272 08:40:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:32.272 08:40:05 -- host/mdns_discovery.sh@47 -- # hostpid=85644 00:43:32.272 08:40:05 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:43:32.272 08:40:05 -- host/mdns_discovery.sh@48 -- # waitforlisten 85644 /tmp/host.sock 00:43:32.272 08:40:05 -- common/autotest_common.sh@819 -- # '[' -z 85644 ']' 00:43:32.272 08:40:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:43:32.272 08:40:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:43:32.272 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:43:32.272 08:40:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:43:32.272 08:40:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:43:32.272 08:40:05 -- common/autotest_common.sh@10 -- # set +x 00:43:32.272 [2024-04-17 08:40:05.506269] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:43:32.272 [2024-04-17 08:40:05.506351] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85644 ] 00:43:32.540 [2024-04-17 08:40:05.642290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:32.540 [2024-04-17 08:40:05.740560] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:43:32.540 [2024-04-17 08:40:05.740712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:33.145 08:40:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:43:33.145 08:40:06 -- common/autotest_common.sh@852 -- # return 0 00:43:33.145 08:40:06 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:43:33.145 08:40:06 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:43:33.145 08:40:06 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:43:33.408 08:40:06 -- host/mdns_discovery.sh@57 -- # avahipid=85673 00:43:33.408 08:40:06 -- host/mdns_discovery.sh@58 -- # sleep 1 00:43:33.408 08:40:06 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:43:33.408 08:40:06 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:43:33.408 Process 1015 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:43:33.409 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:43:33.409 Successfully dropped root privileges. 00:43:33.409 avahi-daemon 0.8 starting up. 00:43:33.409 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:43:33.409 Successfully called chroot(). 00:43:33.409 Successfully dropped remaining capabilities. 00:43:33.409 No service file found in /etc/avahi/services. 00:43:33.409 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:43:33.409 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:43:33.409 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:43:33.409 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:43:33.409 Network interface enumeration completed. 00:43:33.409 Registering new address record for fe80::b861:3dff:fef2:9f8a on nvmf_tgt_if2.*. 00:43:33.409 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:43:33.409 Registering new address record for fe80::98ea:d9ff:fed7:19e5 on nvmf_tgt_if.*. 00:43:33.409 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:43:34.342 Server startup complete. Host name is fedora38-cloud-1705279005-2131.local. Local service cookie is 383429932. 
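Spelled out, the configuration handed to avahi-daemon on /dev/fd/63 above is just four lines. A sketch of the equivalent invocation, with the same namespace and interface names as this harness:

# Run avahi inside the target namespace, bound only to the two target
# interfaces and restricted to IPv4 (matching the 10.0.0.x addressing);
# the test feeds the config through process substitution.
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(printf '%s\n' \
    '[server]' \
    'allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2' \
    'use-ipv4=yes' \
    'use-ipv6=no')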
00:43:34.342 08:40:07 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:43:34.342 08:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:34.342 08:40:07 -- common/autotest_common.sh@10 -- # set +x 00:43:34.343 08:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:34.343 08:40:07 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:43:34.343 08:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:34.343 08:40:07 -- common/autotest_common.sh@10 -- # set +x 00:43:34.343 08:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:34.343 08:40:07 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:43:34.343 08:40:07 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:43:34.343 08:40:07 -- host/mdns_discovery.sh@68 -- # sort 00:43:34.343 08:40:07 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:43:34.343 08:40:07 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:43:34.343 08:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:34.343 08:40:07 -- host/mdns_discovery.sh@68 -- # xargs 00:43:34.343 08:40:07 -- common/autotest_common.sh@10 -- # set +x 00:43:34.343 08:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:34.343 08:40:07 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:43:34.343 08:40:07 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:43:34.343 08:40:07 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:43:34.343 08:40:07 -- host/mdns_discovery.sh@64 -- # sort 00:43:34.343 08:40:07 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:34.343 08:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:34.343 08:40:07 -- common/autotest_common.sh@10 -- # set +x 00:43:34.343 08:40:07 -- host/mdns_discovery.sh@64 -- # xargs 00:43:34.343 08:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:34.343 08:40:07 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:43:34.343 08:40:07 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:43:34.343 08:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:34.343 08:40:07 -- common/autotest_common.sh@10 -- # set +x 00:43:34.343 08:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:34.343 08:40:07 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:43:34.343 08:40:07 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:43:34.343 08:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:34.343 08:40:07 -- common/autotest_common.sh@10 -- # set +x 00:43:34.343 08:40:07 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:43:34.343 08:40:07 -- host/mdns_discovery.sh@68 -- # xargs 00:43:34.343 08:40:07 -- host/mdns_discovery.sh@68 -- # sort 00:43:34.343 08:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@64 -- # xargs 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@64 -- # sort 00:43:34.602 08:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:34.602 08:40:07 -- 
common/autotest_common.sh@10 -- # set +x 00:43:34.602 08:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:43:34.602 08:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:34.602 08:40:07 -- common/autotest_common.sh@10 -- # set +x 00:43:34.602 08:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@68 -- # xargs 00:43:34.602 08:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@68 -- # sort 00:43:34.602 08:40:07 -- common/autotest_common.sh@10 -- # set +x 00:43:34.602 08:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:34.602 08:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:34.602 08:40:07 -- common/autotest_common.sh@10 -- # set +x 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@64 -- # sort 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@64 -- # xargs 00:43:34.602 08:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:34.602 [2024-04-17 08:40:07.834662] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:34.602 08:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:34.602 08:40:07 -- common/autotest_common.sh@10 -- # set +x 00:43:34.602 [2024-04-17 08:40:07.860899] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:34.602 08:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:43:34.602 08:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:34.602 08:40:07 -- common/autotest_common.sh@10 -- # set +x 00:43:34.602 08:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:34.602 08:40:07 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:43:34.602 08:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:34.602 08:40:07 -- common/autotest_common.sh@10 -- # set +x 00:43:34.602 08:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:34.603 08:40:07 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:43:34.603 08:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:34.603 08:40:07 -- common/autotest_common.sh@10 -- # set +x 00:43:34.603 08:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:34.603 08:40:07 -- host/mdns_discovery.sh@116 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:43:34.603 08:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:34.603 08:40:07 -- common/autotest_common.sh@10 -- # set +x 00:43:34.603 08:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:34.603 08:40:07 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:43:34.603 08:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:34.603 08:40:07 -- common/autotest_common.sh@10 -- # set +x 00:43:34.603 [2024-04-17 08:40:07.900810] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:43:34.603 08:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:34.603 08:40:07 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:43:34.603 08:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:34.603 08:40:07 -- common/autotest_common.sh@10 -- # set +x 00:43:34.603 [2024-04-17 08:40:07.912748] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:43:34.603 08:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:34.603 08:40:07 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=85724 00:43:34.603 08:40:07 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:43:34.603 08:40:07 -- host/mdns_discovery.sh@125 -- # sleep 5 00:43:35.540 [2024-04-17 08:40:08.732937] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:43:35.540 Established under name 'CDC' 00:43:36.109 [2024-04-17 08:40:09.132200] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:43:36.109 [2024-04-17 08:40:09.132352] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:43:36.109 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:43:36.109 cookie is 0 00:43:36.109 is_local: 1 00:43:36.109 our_own: 0 00:43:36.109 wide_area: 0 00:43:36.109 multicast: 1 00:43:36.109 cached: 1 00:43:36.109 [2024-04-17 08:40:09.231991] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:43:36.109 [2024-04-17 08:40:09.232099] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:43:36.109 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:43:36.109 cookie is 0 00:43:36.109 is_local: 1 00:43:36.109 our_own: 0 00:43:36.109 wide_area: 0 00:43:36.109 multicast: 1 00:43:36.109 cached: 1 00:43:37.049 [2024-04-17 08:40:10.142943] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:43:37.049 [2024-04-17 08:40:10.143038] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:43:37.049 [2024-04-17 08:40:10.143073] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:43:37.049 [2024-04-17 08:40:10.228904] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:43:37.049 [2024-04-17 08:40:10.242603] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: 
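Taken together with the bdev_nvme_start_mdns_discovery call issued earlier against /tmp/host.sock, the avahi-publish above completes the round trip: the host app is already browsing for _nvme-disc._tcp, and this 'CDC' service record is what its resolve handler will pick up. A condensed sketch of the two halves, assuming rpc_cmd maps to rpc.py as elsewhere in this trace:

# Host app (started first): browse mDNS for the NVMe discovery service type
# and auto-attach whatever is found, using the test host NQN.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
    bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

# Target namespace: advertise the discovery controller on port 8009 as a CDC
# service, carrying the discovery NQN and transport type as TXT records.
ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC \
    _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp &
sleep 5   # give the mDNS browse/resolve round trip time to complete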
Discovery[10.0.0.2:8009] discovery ctrlr attached 00:43:37.049 [2024-04-17 08:40:10.242678] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:43:37.049 [2024-04-17 08:40:10.242710] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:43:37.049 [2024-04-17 08:40:10.295286] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:43:37.049 [2024-04-17 08:40:10.295442] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:43:37.049 [2024-04-17 08:40:10.328552] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:43:37.309 [2024-04-17 08:40:10.383641] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:43:37.309 [2024-04-17 08:40:10.383767] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:43:39.845 08:40:12 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:43:39.845 08:40:12 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:43:39.845 08:40:12 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:43:39.845 08:40:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:39.845 08:40:12 -- host/mdns_discovery.sh@80 -- # sort 00:43:39.845 08:40:12 -- common/autotest_common.sh@10 -- # set +x 00:43:39.845 08:40:12 -- host/mdns_discovery.sh@80 -- # xargs 00:43:39.845 08:40:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:39.845 08:40:12 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:43:39.845 08:40:12 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:43:39.845 08:40:12 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:43:39.845 08:40:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:39.845 08:40:12 -- host/mdns_discovery.sh@76 -- # xargs 00:43:39.845 08:40:12 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:43:39.845 08:40:12 -- common/autotest_common.sh@10 -- # set +x 00:43:39.845 08:40:12 -- host/mdns_discovery.sh@76 -- # sort 00:43:39.845 08:40:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:39.845 08:40:13 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:43:39.845 08:40:13 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:43:39.845 08:40:13 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:43:39.845 08:40:13 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:43:39.845 08:40:13 -- host/mdns_discovery.sh@68 -- # sort 00:43:39.845 08:40:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:39.845 08:40:13 -- host/mdns_discovery.sh@68 -- # xargs 00:43:39.845 08:40:13 -- common/autotest_common.sh@10 -- # set +x 00:43:39.845 08:40:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:39.845 08:40:13 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:43:39.845 08:40:13 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:43:39.845 08:40:13 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:39.845 08:40:13 -- common/autotest_common.sh@551 -- 
# xtrace_disable 00:43:39.845 08:40:13 -- common/autotest_common.sh@10 -- # set +x 00:43:39.845 08:40:13 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:43:39.845 08:40:13 -- host/mdns_discovery.sh@64 -- # sort 00:43:39.845 08:40:13 -- host/mdns_discovery.sh@64 -- # xargs 00:43:39.845 08:40:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:39.845 08:40:13 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:43:39.845 08:40:13 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:43:39.845 08:40:13 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:43:39.845 08:40:13 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:43:39.845 08:40:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:39.845 08:40:13 -- host/mdns_discovery.sh@72 -- # sort -n 00:43:39.845 08:40:13 -- common/autotest_common.sh@10 -- # set +x 00:43:39.845 08:40:13 -- host/mdns_discovery.sh@72 -- # xargs 00:43:39.845 08:40:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:40.104 08:40:13 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:43:40.104 08:40:13 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:43:40.104 08:40:13 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:43:40.104 08:40:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:40.104 08:40:13 -- common/autotest_common.sh@10 -- # set +x 00:43:40.104 08:40:13 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:43:40.104 08:40:13 -- host/mdns_discovery.sh@72 -- # sort -n 00:43:40.104 08:40:13 -- host/mdns_discovery.sh@72 -- # xargs 00:43:40.104 08:40:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:40.104 08:40:13 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:43:40.104 08:40:13 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:43:40.104 08:40:13 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:43:40.104 08:40:13 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:43:40.104 08:40:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:40.104 08:40:13 -- common/autotest_common.sh@10 -- # set +x 00:43:40.104 08:40:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:40.104 08:40:13 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:43:40.104 08:40:13 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:43:40.104 08:40:13 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:43:40.104 08:40:13 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:43:40.104 08:40:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:40.104 08:40:13 -- common/autotest_common.sh@10 -- # set +x 00:43:40.104 08:40:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:40.104 08:40:13 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:43:40.104 08:40:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:40.104 08:40:13 -- common/autotest_common.sh@10 -- # set +x 00:43:40.104 08:40:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:40.104 08:40:13 -- host/mdns_discovery.sh@139 -- # sleep 1 00:43:41.043 08:40:14 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:43:41.043 08:40:14 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:41.043 08:40:14 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:43:41.043 08:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:41.043 08:40:14 -- host/mdns_discovery.sh@64 -- # sort 00:43:41.043 08:40:14 -- common/autotest_common.sh@10 -- # set +x 00:43:41.043 08:40:14 -- host/mdns_discovery.sh@64 -- # xargs 00:43:41.302 08:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:41.302 08:40:14 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:43:41.302 08:40:14 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:43:41.302 08:40:14 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:43:41.302 08:40:14 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:43:41.302 08:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:41.302 08:40:14 -- common/autotest_common.sh@10 -- # set +x 00:43:41.302 08:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:41.302 08:40:14 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:43:41.302 08:40:14 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:43:41.302 08:40:14 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:43:41.302 08:40:14 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:43:41.302 08:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:41.302 08:40:14 -- common/autotest_common.sh@10 -- # set +x 00:43:41.302 [2024-04-17 08:40:14.455158] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:43:41.302 [2024-04-17 08:40:14.456272] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:43:41.302 [2024-04-17 08:40:14.456302] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:43:41.302 [2024-04-17 08:40:14.456330] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:43:41.302 [2024-04-17 08:40:14.456342] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:43:41.302 08:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:41.302 08:40:14 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:43:41.302 08:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:41.302 08:40:14 -- common/autotest_common.sh@10 -- # set +x 00:43:41.302 [2024-04-17 08:40:14.467074] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:43:41.302 [2024-04-17 08:40:14.467235] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:43:41.302 [2024-04-17 08:40:14.467269] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:43:41.302 08:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:41.302 08:40:14 -- host/mdns_discovery.sh@149 -- # sleep 1 00:43:41.302 [2024-04-17 08:40:14.598100] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:43:41.302 [2024-04-17 08:40:14.598266] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:43:41.562 [2024-04-17 08:40:14.657233] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:43:41.562 [2024-04-17 08:40:14.657259] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:43:41.562 [2024-04-17 08:40:14.657263] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:43:41.562 [2024-04-17 08:40:14.657278] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:43:41.562 [2024-04-17 08:40:14.657307] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:43:41.562 [2024-04-17 08:40:14.657313] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:43:41.562 [2024-04-17 08:40:14.657317] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:43:41.562 [2024-04-17 08:40:14.657327] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:43:41.562 [2024-04-17 08:40:14.703023] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:43:41.562 [2024-04-17 08:40:14.703045] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:43:41.562 [2024-04-17 08:40:14.703075] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:43:41.562 [2024-04-17 08:40:14.703080] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@68 -- # sort 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:43:42.501 08:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:42.501 08:40:15 -- common/autotest_common.sh@10 -- # set +x 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@68 -- # xargs 00:43:42.501 08:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:42.501 08:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:42.501 08:40:15 -- common/autotest_common.sh@10 -- # set +x 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@64 -- # sort 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@64 -- # xargs 00:43:42.501 08:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:43:42.501 08:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@72 -- # sort -n 00:43:42.501 08:40:15 -- common/autotest_common.sh@10 -- # set +x 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@72 -- # xargs 00:43:42.501 08:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@72 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:43:42.501 08:40:15 -- host/mdns_discovery.sh@72 -- # xargs 00:43:42.502 08:40:15 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:43:42.502 08:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:42.502 08:40:15 -- host/mdns_discovery.sh@72 -- # sort -n 00:43:42.502 08:40:15 -- common/autotest_common.sh@10 -- # set +x 00:43:42.502 08:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:42.502 08:40:15 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:43:42.502 08:40:15 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:43:42.502 08:40:15 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:43:42.502 08:40:15 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:43:42.502 08:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:42.502 08:40:15 -- common/autotest_common.sh@10 -- # set +x 00:43:42.502 08:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:42.502 08:40:15 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:43:42.502 08:40:15 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:43:42.502 08:40:15 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:43:42.502 08:40:15 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:42.502 08:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:42.502 08:40:15 -- common/autotest_common.sh@10 -- # set +x 00:43:42.502 [2024-04-17 08:40:15.770019] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:43:42.502 [2024-04-17 08:40:15.770051] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:43:42.502 [2024-04-17 08:40:15.770079] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:43:42.502 [2024-04-17 08:40:15.770090] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:43:42.502 [2024-04-17 08:40:15.772787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:43:42.502 [2024-04-17 08:40:15.772821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:42.502 [2024-04-17 08:40:15.772831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:43:42.502 [2024-04-17 08:40:15.772838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:42.502 [2024-04-17 08:40:15.772846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:43:42.502 [2024-04-17 08:40:15.772852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:42.502 [2024-04-17 08:40:15.772859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:43:42.502 [2024-04-17 08:40:15.772866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:42.502 [2024-04-17 08:40:15.772873] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fcf40 is same with the state(5) to be set 00:43:42.502 08:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:42.502 08:40:15 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:43:42.502 08:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:42.502 08:40:15 -- common/autotest_common.sh@10 -- # set +x 00:43:42.502 [2024-04-17 08:40:15.782029] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:43:42.502 [2024-04-17 08:40:15.782068] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:43:42.502 [2024-04-17 08:40:15.782733] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fcf40 (9): Bad file descriptor 00:43:42.502 [2024-04-17 08:40:15.784613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:43:42.502 [2024-04-17 08:40:15.784639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:42.502 [2024-04-17 08:40:15.784647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:43:42.502 [2024-04-17 08:40:15.784654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:42.502 [2024-04-17 08:40:15.784662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:43:42.502 [2024-04-17 08:40:15.784668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:42.502 [2024-04-17 08:40:15.784675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:43:42.502 [2024-04-17 08:40:15.784682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:42.502 [2024-04-17 08:40:15.784689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459480 is same with the state(5) to be set 00:43:42.502 08:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:42.502 08:40:15 -- host/mdns_discovery.sh@162 -- # sleep 1 00:43:42.502 [2024-04-17 08:40:15.792738] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:42.502 [2024-04-17 08:40:15.792911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.502 [2024-04-17 08:40:15.792976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.502 [2024-04-17 08:40:15.793014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fcf40 with addr=10.0.0.2, port=4420 00:43:42.502 [2024-04-17 08:40:15.793058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fcf40 is same with the state(5) to be set 00:43:42.502 [2024-04-17 08:40:15.793102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fcf40 (9): Bad file descriptor 00:43:42.502 [2024-04-17 08:40:15.793162] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:42.502 [2024-04-17 08:40:15.793204] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:43:42.502 [2024-04-17 08:40:15.793249] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:42.502 [2024-04-17 08:40:15.793284] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:42.502 [2024-04-17 08:40:15.794568] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2459480 (9): Bad file descriptor 00:43:42.502 [2024-04-17 08:40:15.802840] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:42.502 [2024-04-17 08:40:15.802977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.502 [2024-04-17 08:40:15.803043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.502 [2024-04-17 08:40:15.803080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fcf40 with addr=10.0.0.2, port=4420 00:43:42.502 [2024-04-17 08:40:15.803124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fcf40 is same with the state(5) to be set 00:43:42.502 [2024-04-17 08:40:15.803140] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fcf40 (9): Bad file descriptor 00:43:42.502 [2024-04-17 08:40:15.803163] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:42.502 [2024-04-17 08:40:15.803169] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:43:42.502 [2024-04-17 08:40:15.803177] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:42.502 [2024-04-17 08:40:15.803189] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:42.502 [2024-04-17 08:40:15.804559] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:43:42.502 [2024-04-17 08:40:15.804657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.502 [2024-04-17 08:40:15.804718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.502 [2024-04-17 08:40:15.804755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2459480 with addr=10.0.0.3, port=4420 00:43:42.502 [2024-04-17 08:40:15.804797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459480 is same with the state(5) to be set 00:43:42.502 [2024-04-17 08:40:15.804841] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2459480 (9): Bad file descriptor 00:43:42.502 [2024-04-17 08:40:15.804889] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:43:42.502 [2024-04-17 08:40:15.804934] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:43:42.502 [2024-04-17 08:40:15.804975] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:43:42.502 [2024-04-17 08:40:15.805006] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
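The cycle above, connect() refused with errno 111 followed by "controller reinitialization failed" and "Resetting controller failed", is the expected fallout of the @160/@161 steps removing the 4420 listeners while the host still holds attached paths to them. A minimal sketch of the target-side step that sets this off, assuming scripts/rpc.py from the SPDK tree and the default target RPC socket (both illustrative; the actual invocation in this run is hidden behind the suite's rpc_cmd wrapper):

    # Drop the 4420 listeners; the 4421 listeners added at @147/@148 stay up,
    # so the host keeps one live path per subsystem while bdev_nvme retries
    # the dead one.
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420
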
00:43:42.502 [2024-04-17 08:40:15.812919] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:42.502 [2024-04-17 08:40:15.813045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.502 [2024-04-17 08:40:15.813124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.502 [2024-04-17 08:40:15.813161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fcf40 with addr=10.0.0.2, port=4420 00:43:42.502 [2024-04-17 08:40:15.813204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fcf40 is same with the state(5) to be set 00:43:42.502 [2024-04-17 08:40:15.813253] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fcf40 (9): Bad file descriptor 00:43:42.502 [2024-04-17 08:40:15.813307] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:42.502 [2024-04-17 08:40:15.813350] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:43:42.502 [2024-04-17 08:40:15.813402] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:42.502 [2024-04-17 08:40:15.813436] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:42.502 [2024-04-17 08:40:15.814612] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:43:42.502 [2024-04-17 08:40:15.814720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.502 [2024-04-17 08:40:15.814780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.502 [2024-04-17 08:40:15.814817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2459480 with addr=10.0.0.3, port=4420 00:43:42.502 [2024-04-17 08:40:15.814859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459480 is same with the state(5) to be set 00:43:42.502 [2024-04-17 08:40:15.814904] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2459480 (9): Bad file descriptor 00:43:42.502 [2024-04-17 08:40:15.814952] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:43:42.502 [2024-04-17 08:40:15.814990] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:43:42.502 [2024-04-17 08:40:15.815029] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:43:42.502 [2024-04-17 08:40:15.815060] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:42.502 [2024-04-17 08:40:15.822997] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:42.503 [2024-04-17 08:40:15.823120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.503 [2024-04-17 08:40:15.823184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.503 [2024-04-17 08:40:15.823220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fcf40 with addr=10.0.0.2, port=4420 00:43:42.503 [2024-04-17 08:40:15.823264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fcf40 is same with the state(5) to be set 00:43:42.503 [2024-04-17 08:40:15.823336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fcf40 (9): Bad file descriptor 00:43:42.503 [2024-04-17 08:40:15.823407] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:42.503 [2024-04-17 08:40:15.823452] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:43:42.503 [2024-04-17 08:40:15.823515] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:42.503 [2024-04-17 08:40:15.823543] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:42.503 [2024-04-17 08:40:15.824675] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:43:42.503 [2024-04-17 08:40:15.824768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.503 [2024-04-17 08:40:15.824830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.503 [2024-04-17 08:40:15.824865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2459480 with addr=10.0.0.3, port=4420 00:43:42.503 [2024-04-17 08:40:15.824897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459480 is same with the state(5) to be set 00:43:42.503 [2024-04-17 08:40:15.824909] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2459480 (9): Bad file descriptor 00:43:42.503 [2024-04-17 08:40:15.824919] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:43:42.503 [2024-04-17 08:40:15.824925] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:43:42.503 [2024-04-17 08:40:15.824932] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:43:42.503 [2024-04-17 08:40:15.824943] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:42.763 [2024-04-17 08:40:15.833079] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:42.763 [2024-04-17 08:40:15.833164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.763 [2024-04-17 08:40:15.833193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.763 [2024-04-17 08:40:15.833201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fcf40 with addr=10.0.0.2, port=4420 00:43:42.763 [2024-04-17 08:40:15.833208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fcf40 is same with the state(5) to be set 00:43:42.763 [2024-04-17 08:40:15.833219] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fcf40 (9): Bad file descriptor 00:43:42.763 [2024-04-17 08:40:15.833228] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:42.763 [2024-04-17 08:40:15.833233] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:43:42.763 [2024-04-17 08:40:15.833240] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:42.763 [2024-04-17 08:40:15.833250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:42.763 [2024-04-17 08:40:15.834728] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:43:42.763 [2024-04-17 08:40:15.834792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.763 [2024-04-17 08:40:15.834821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.763 [2024-04-17 08:40:15.834830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2459480 with addr=10.0.0.3, port=4420 00:43:42.763 [2024-04-17 08:40:15.834836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459480 is same with the state(5) to be set 00:43:42.763 [2024-04-17 08:40:15.834847] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2459480 (9): Bad file descriptor 00:43:42.763 [2024-04-17 08:40:15.834856] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:43:42.763 [2024-04-17 08:40:15.834862] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:43:42.763 [2024-04-17 08:40:15.834868] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:43:42.763 [2024-04-17 08:40:15.834879] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:42.763 [2024-04-17 08:40:15.843108] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:42.763 [2024-04-17 08:40:15.843176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.763 [2024-04-17 08:40:15.843205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.763 [2024-04-17 08:40:15.843214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fcf40 with addr=10.0.0.2, port=4420 00:43:42.763 [2024-04-17 08:40:15.843221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fcf40 is same with the state(5) to be set 00:43:42.763 [2024-04-17 08:40:15.843232] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fcf40 (9): Bad file descriptor 00:43:42.763 [2024-04-17 08:40:15.843241] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:42.763 [2024-04-17 08:40:15.843247] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:43:42.764 [2024-04-17 08:40:15.843253] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:42.764 [2024-04-17 08:40:15.843263] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:42.764 [2024-04-17 08:40:15.844749] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:43:42.764 [2024-04-17 08:40:15.844803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.764 [2024-04-17 08:40:15.844830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.764 [2024-04-17 08:40:15.844839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2459480 with addr=10.0.0.3, port=4420 00:43:42.764 [2024-04-17 08:40:15.844846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459480 is same with the state(5) to be set 00:43:42.764 [2024-04-17 08:40:15.844856] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2459480 (9): Bad file descriptor 00:43:42.764 [2024-04-17 08:40:15.844865] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:43:42.764 [2024-04-17 08:40:15.844871] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:43:42.764 [2024-04-17 08:40:15.844877] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:43:42.764 [2024-04-17 08:40:15.844887] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:42.764 [2024-04-17 08:40:15.853134] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:42.764 [2024-04-17 08:40:15.853206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.764 [2024-04-17 08:40:15.853235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.764 [2024-04-17 08:40:15.853244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fcf40 with addr=10.0.0.2, port=4420 00:43:42.764 [2024-04-17 08:40:15.853251] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fcf40 is same with the state(5) to be set 00:43:42.764 [2024-04-17 08:40:15.853262] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fcf40 (9): Bad file descriptor 00:43:42.764 [2024-04-17 08:40:15.853272] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:42.764 [2024-04-17 08:40:15.853277] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:43:42.764 [2024-04-17 08:40:15.853284] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:42.764 [2024-04-17 08:40:15.853294] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:42.764 [2024-04-17 08:40:15.854768] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:43:42.764 [2024-04-17 08:40:15.854832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.764 [2024-04-17 08:40:15.854861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.764 [2024-04-17 08:40:15.854870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2459480 with addr=10.0.0.3, port=4420 00:43:42.764 [2024-04-17 08:40:15.854877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459480 is same with the state(5) to be set 00:43:42.764 [2024-04-17 08:40:15.854887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2459480 (9): Bad file descriptor 00:43:42.764 [2024-04-17 08:40:15.854897] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:43:42.764 [2024-04-17 08:40:15.854902] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:43:42.764 [2024-04-17 08:40:15.854909] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:43:42.764 [2024-04-17 08:40:15.854918] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:42.764 [2024-04-17 08:40:15.863163] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:42.764 [2024-04-17 08:40:15.863232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.764 [2024-04-17 08:40:15.863260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.764 [2024-04-17 08:40:15.863269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fcf40 with addr=10.0.0.2, port=4420 00:43:42.764 [2024-04-17 08:40:15.863277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fcf40 is same with the state(5) to be set 00:43:42.764 [2024-04-17 08:40:15.863288] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fcf40 (9): Bad file descriptor 00:43:42.764 [2024-04-17 08:40:15.863298] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:42.764 [2024-04-17 08:40:15.863304] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:43:42.764 [2024-04-17 08:40:15.863310] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:42.764 [2024-04-17 08:40:15.863331] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:42.764 [2024-04-17 08:40:15.864792] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:43:42.764 [2024-04-17 08:40:15.864846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.764 [2024-04-17 08:40:15.864873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.764 [2024-04-17 08:40:15.864882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2459480 with addr=10.0.0.3, port=4420 00:43:42.764 [2024-04-17 08:40:15.864889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459480 is same with the state(5) to be set 00:43:42.764 [2024-04-17 08:40:15.864899] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2459480 (9): Bad file descriptor 00:43:42.764 [2024-04-17 08:40:15.864908] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:43:42.764 [2024-04-17 08:40:15.864914] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:43:42.764 [2024-04-17 08:40:15.864920] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:43:42.764 [2024-04-17 08:40:15.864930] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:42.764 [2024-04-17 08:40:15.873191] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:42.764 [2024-04-17 08:40:15.873267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.764 [2024-04-17 08:40:15.873297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.764 [2024-04-17 08:40:15.873306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fcf40 with addr=10.0.0.2, port=4420 00:43:42.764 [2024-04-17 08:40:15.873313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fcf40 is same with the state(5) to be set 00:43:42.764 [2024-04-17 08:40:15.873323] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fcf40 (9): Bad file descriptor 00:43:42.764 [2024-04-17 08:40:15.873366] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:42.764 [2024-04-17 08:40:15.873374] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:43:42.764 [2024-04-17 08:40:15.873380] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:42.764 [2024-04-17 08:40:15.873416] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:42.764 [2024-04-17 08:40:15.874811] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:43:42.764 [2024-04-17 08:40:15.874877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.764 [2024-04-17 08:40:15.874905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.764 [2024-04-17 08:40:15.874914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2459480 with addr=10.0.0.3, port=4420 00:43:42.764 [2024-04-17 08:40:15.874921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459480 is same with the state(5) to be set 00:43:42.764 [2024-04-17 08:40:15.874933] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2459480 (9): Bad file descriptor 00:43:42.764 [2024-04-17 08:40:15.874942] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:43:42.764 [2024-04-17 08:40:15.874948] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:43:42.764 [2024-04-17 08:40:15.874954] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:43:42.764 [2024-04-17 08:40:15.874965] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
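The retry blocks repeat on a roughly 10 ms cadence, alternating between cnode0 (tqpair 0x24fcf40 on 10.0.0.2) and cnode20 (tqpair 0x2459480 on 10.0.0.3), consistent with bdev_nvme's reconnect poller re-arming after each failed attempt. Rather than sleeping for a fixed interval, a test waiting for this to settle can poll the controller list until the stale path disappears; a hedged sketch in the suite's shell style, where wait_for_path_removal is a hypothetical helper and the 20-second deadline is an assumption, not a value from this run:

    # Poll until the stale trsvcid has been pruned from a discovered controller.
    # wait_for_path_removal is illustrative only, not part of the SPDK tree.
    wait_for_path_removal() {
        local ctrlr=$1 port=$2 deadline=$((SECONDS + 20))
        while scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$ctrlr" \
                | jq -r '.[].ctrlrs[].trid.trsvcid' | grep -qx "$port"; do
            ((SECONDS < deadline)) || return 1  # give up after ~20s
            sleep 0.5
        done
    }
    wait_for_path_removal mdns1_nvme0 4420
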
00:43:42.764 [2024-04-17 08:40:15.883223] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:42.764 [2024-04-17 08:40:15.883296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.764 [2024-04-17 08:40:15.883326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.764 [2024-04-17 08:40:15.883335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fcf40 with addr=10.0.0.2, port=4420 00:43:42.764 [2024-04-17 08:40:15.883342] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fcf40 is same with the state(5) to be set 00:43:42.764 [2024-04-17 08:40:15.883353] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fcf40 (9): Bad file descriptor 00:43:42.764 [2024-04-17 08:40:15.883373] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:42.764 [2024-04-17 08:40:15.883379] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:43:42.764 [2024-04-17 08:40:15.883386] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:42.764 [2024-04-17 08:40:15.883413] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:42.764 [2024-04-17 08:40:15.884834] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:43:42.764 [2024-04-17 08:40:15.884886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.764 [2024-04-17 08:40:15.884912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.764 [2024-04-17 08:40:15.884921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2459480 with addr=10.0.0.3, port=4420 00:43:42.764 [2024-04-17 08:40:15.884927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459480 is same with the state(5) to be set 00:43:42.764 [2024-04-17 08:40:15.884938] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2459480 (9): Bad file descriptor 00:43:42.764 [2024-04-17 08:40:15.884947] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:43:42.765 [2024-04-17 08:40:15.884952] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:43:42.765 [2024-04-17 08:40:15.884959] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:43:42.765 [2024-04-17 08:40:15.884968] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:42.765 [2024-04-17 08:40:15.893251] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:42.765 [2024-04-17 08:40:15.893318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.765 [2024-04-17 08:40:15.893348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.765 [2024-04-17 08:40:15.893357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fcf40 with addr=10.0.0.2, port=4420 00:43:42.765 [2024-04-17 08:40:15.893364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fcf40 is same with the state(5) to be set 00:43:42.765 [2024-04-17 08:40:15.893375] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fcf40 (9): Bad file descriptor 00:43:42.765 [2024-04-17 08:40:15.893418] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:42.765 [2024-04-17 08:40:15.893426] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:43:42.765 [2024-04-17 08:40:15.893432] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:42.765 [2024-04-17 08:40:15.893442] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:42.765 [2024-04-17 08:40:15.894851] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:43:42.765 [2024-04-17 08:40:15.894913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.765 [2024-04-17 08:40:15.894941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.765 [2024-04-17 08:40:15.894951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2459480 with addr=10.0.0.3, port=4420 00:43:42.765 [2024-04-17 08:40:15.894957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459480 is same with the state(5) to be set 00:43:42.765 [2024-04-17 08:40:15.894968] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2459480 (9): Bad file descriptor 00:43:42.765 [2024-04-17 08:40:15.894978] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:43:42.765 [2024-04-17 08:40:15.894983] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:43:42.765 [2024-04-17 08:40:15.894989] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:43:42.765 [2024-04-17 08:40:15.894999] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:42.765 [2024-04-17 08:40:15.903278] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:42.765 [2024-04-17 08:40:15.903346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.765 [2024-04-17 08:40:15.903376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.765 [2024-04-17 08:40:15.903386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fcf40 with addr=10.0.0.2, port=4420 00:43:42.765 [2024-04-17 08:40:15.903403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fcf40 is same with the state(5) to be set 00:43:42.765 [2024-04-17 08:40:15.903416] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fcf40 (9): Bad file descriptor 00:43:42.765 [2024-04-17 08:40:15.903437] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:42.765 [2024-04-17 08:40:15.903443] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:43:42.765 [2024-04-17 08:40:15.903449] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:42.765 [2024-04-17 08:40:15.903459] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:42.765 [2024-04-17 08:40:15.904872] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:43:42.765 [2024-04-17 08:40:15.904923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.765 [2024-04-17 08:40:15.904949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.765 [2024-04-17 08:40:15.904959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2459480 with addr=10.0.0.3, port=4420 00:43:42.765 [2024-04-17 08:40:15.904965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459480 is same with the state(5) to be set 00:43:42.765 [2024-04-17 08:40:15.904975] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2459480 (9): Bad file descriptor 00:43:42.765 [2024-04-17 08:40:15.904985] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:43:42.765 [2024-04-17 08:40:15.904990] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:43:42.765 [2024-04-17 08:40:15.904996] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:43:42.765 [2024-04-17 08:40:15.905006] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:42.765 [2024-04-17 08:40:15.913308] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:42.765 [2024-04-17 08:40:15.913414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.765 [2024-04-17 08:40:15.913467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:42.765 [2024-04-17 08:40:15.913478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fcf40 with addr=10.0.0.2, port=4420 00:43:42.765 [2024-04-17 08:40:15.913486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fcf40 is same with the state(5) to be set 00:43:42.765 [2024-04-17 08:40:15.913507] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fcf40 (9): Bad file descriptor 00:43:42.765 [2024-04-17 08:40:15.913562] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:43:42.765 [2024-04-17 08:40:15.913578] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:43:42.765 [2024-04-17 08:40:15.913625] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:43:42.765 [2024-04-17 08:40:15.913650] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:43:42.765 [2024-04-17 08:40:15.913662] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:43:42.765 [2024-04-17 08:40:15.913672] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:43:42.765 [2024-04-17 08:40:15.913695] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:42.765 [2024-04-17 08:40:15.913702] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:43:42.765 [2024-04-17 08:40:15.913709] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:42.765 [2024-04-17 08:40:15.913729] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
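The paired "…:4420 not found" / "…:4421 found again" messages show the discovery poller reconciling the fresh log page: entries that vanished are detached, surviving entries are kept, leaving each subsystem with only its 4421 path. The get_subsystem_paths checks that follow at @166/@167 boil down to the pipeline already visible in this trace, reconstructed here as a standalone command (with rpc_cmd expanded to an explicit scripts/rpc.py call; /tmp/host.sock is the host-side RPC socket used throughout this test):

    # List the remaining transport service IDs for one discovered controller.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    # Expected output once the 4420 listeners are pruned: 4421
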
00:43:42.765 [2024-04-17 08:40:16.000497] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:43:42.765 [2024-04-17 08:40:16.000560] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:43:43.698 08:40:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:43.698 08:40:16 -- common/autotest_common.sh@10 -- # set +x 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@68 -- # sort 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@68 -- # xargs 00:43:43.698 08:40:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@64 -- # sort 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:43.698 08:40:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:43.698 08:40:16 -- common/autotest_common.sh@10 -- # set +x 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@64 -- # xargs 00:43:43.698 08:40:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:43:43.698 08:40:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:43.698 08:40:16 -- common/autotest_common.sh@10 -- # set +x 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@72 -- # sort -n 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@72 -- # xargs 00:43:43.698 08:40:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@72 -- # xargs 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:43:43.698 08:40:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:43.698 08:40:16 -- host/mdns_discovery.sh@72 -- # sort -n 00:43:43.698 08:40:16 -- common/autotest_common.sh@10 -- # set +x 00:43:43.698 08:40:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:43.698 08:40:17 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:43:43.698 08:40:17 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:43:43.698 08:40:17 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:43:43.698 
08:40:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:43.698 08:40:17 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:43:43.698 08:40:17 -- common/autotest_common.sh@10 -- # set +x 00:43:43.698 08:40:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:43.956 08:40:17 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:43:43.956 08:40:17 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:43:43.956 08:40:17 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:43:43.956 08:40:17 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:43:43.956 08:40:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:43.956 08:40:17 -- common/autotest_common.sh@10 -- # set +x 00:43:43.956 08:40:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:43.956 08:40:17 -- host/mdns_discovery.sh@172 -- # sleep 1 00:43:43.956 [2024-04-17 08:40:17.116835] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:43:44.902 08:40:18 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:43:44.902 08:40:18 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:43:44.902 08:40:18 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:43:44.902 08:40:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:44.902 08:40:18 -- host/mdns_discovery.sh@80 -- # sort 00:43:44.902 08:40:18 -- common/autotest_common.sh@10 -- # set +x 00:43:44.902 08:40:18 -- host/mdns_discovery.sh@80 -- # xargs 00:43:44.902 08:40:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:44.902 08:40:18 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:43:44.902 08:40:18 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:43:44.902 08:40:18 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:43:44.903 08:40:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:44.903 08:40:18 -- common/autotest_common.sh@10 -- # set +x 00:43:44.903 08:40:18 -- host/mdns_discovery.sh@68 -- # sort 00:43:44.903 08:40:18 -- host/mdns_discovery.sh@68 -- # xargs 00:43:44.903 08:40:18 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:43:44.903 08:40:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:44.903 08:40:18 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:43:44.903 08:40:18 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:43:44.903 08:40:18 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:43:44.903 08:40:18 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:44.903 08:40:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:44.903 08:40:18 -- host/mdns_discovery.sh@64 -- # xargs 00:43:44.903 08:40:18 -- common/autotest_common.sh@10 -- # set +x 00:43:44.903 08:40:18 -- host/mdns_discovery.sh@64 -- # sort 00:43:44.903 08:40:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:45.161 08:40:18 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:43:45.161 08:40:18 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:43:45.161 08:40:18 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:43:45.161 08:40:18 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:43:45.161 08:40:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:45.161 08:40:18 -- common/autotest_common.sh@10 -- # set +x 00:43:45.161 08:40:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:45.161 08:40:18 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:43:45.161 08:40:18 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:43:45.161 08:40:18 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:43:45.161 08:40:18 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:43:45.161 08:40:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:45.161 08:40:18 -- common/autotest_common.sh@10 -- # set +x 00:43:45.161 08:40:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:45.161 08:40:18 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:43:45.161 08:40:18 -- common/autotest_common.sh@640 -- # local es=0 00:43:45.161 08:40:18 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:43:45.161 08:40:18 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:43:45.161 08:40:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:43:45.161 08:40:18 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:43:45.161 08:40:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:43:45.161 08:40:18 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:43:45.161 08:40:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:45.161 08:40:18 -- common/autotest_common.sh@10 -- # set +x 00:43:45.161 [2024-04-17 08:40:18.297640] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:43:45.161 2024/04/17 08:40:18 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:43:45.161 request: 00:43:45.161 { 00:43:45.161 "method": "bdev_nvme_start_mdns_discovery", 00:43:45.161 "params": { 00:43:45.161 "name": "mdns", 00:43:45.161 "svcname": "_nvme-disc._http", 00:43:45.161 "hostnqn": "nqn.2021-12.io.spdk:test" 00:43:45.161 } 00:43:45.161 } 00:43:45.161 Got JSON-RPC error response 00:43:45.161 GoRPCClient: error on JSON-RPC call 00:43:45.161 08:40:18 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:43:45.161 08:40:18 -- common/autotest_common.sh@643 -- # es=1 00:43:45.161 08:40:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:43:45.161 08:40:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:43:45.161 08:40:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:43:45.161 08:40:18 -- host/mdns_discovery.sh@183 -- # sleep 5 00:43:45.420 [2024-04-17 08:40:18.681424] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:43:45.678 [2024-04-17 08:40:18.781223] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:43:45.678 [2024-04-17 08:40:18.881054] bdev_mdns_client.c: 254:mdns_resolve_handler: 
*INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:43:45.678 [2024-04-17 08:40:18.881091] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:43:45.678 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:43:45.678 cookie is 0 00:43:45.678 is_local: 1 00:43:45.678 our_own: 0 00:43:45.678 wide_area: 0 00:43:45.678 multicast: 1 00:43:45.678 cached: 1 00:43:45.678 [2024-04-17 08:40:18.980863] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:43:45.678 [2024-04-17 08:40:18.980917] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:43:45.678 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:43:45.678 cookie is 0 00:43:45.678 is_local: 1 00:43:45.678 our_own: 0 00:43:45.678 wide_area: 0 00:43:45.678 multicast: 1 00:43:45.678 cached: 1 00:43:46.614 [2024-04-17 08:40:19.885856] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:43:46.614 [2024-04-17 08:40:19.885895] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:43:46.614 [2024-04-17 08:40:19.885915] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:43:46.872 [2024-04-17 08:40:19.971835] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:43:46.872 [2024-04-17 08:40:19.985510] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:43:46.872 [2024-04-17 08:40:19.985530] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:43:46.872 [2024-04-17 08:40:19.985546] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:43:46.872 [2024-04-17 08:40:20.035371] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:43:46.872 [2024-04-17 08:40:20.035415] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:43:46.872 [2024-04-17 08:40:20.072445] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:43:46.872 [2024-04-17 08:40:20.131555] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:43:46.872 [2024-04-17 08:40:20.131594] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:43:50.161 08:40:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:50.161 08:40:23 -- common/autotest_common.sh@10 -- # set +x 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@80 -- # sort 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@80 -- # xargs 00:43:50.161 08:40:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:43:50.161 08:40:23 -- 
host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@76 -- # xargs 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@76 -- # sort 00:43:50.161 08:40:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:50.161 08:40:23 -- common/autotest_common.sh@10 -- # set +x 00:43:50.161 08:40:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@64 -- # sort 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:50.161 08:40:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:50.161 08:40:23 -- common/autotest_common.sh@10 -- # set +x 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@64 -- # xargs 00:43:50.161 08:40:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:43:50.161 08:40:23 -- common/autotest_common.sh@640 -- # local es=0 00:43:50.161 08:40:23 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:43:50.161 08:40:23 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:43:50.161 08:40:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:43:50.161 08:40:23 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:43:50.161 08:40:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:43:50.161 08:40:23 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:43:50.161 08:40:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:50.161 08:40:23 -- common/autotest_common.sh@10 -- # set +x 00:43:50.161 [2024-04-17 08:40:23.483106] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:43:50.161 2024/04/17 08:40:23 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:43:50.161 request: 00:43:50.161 { 00:43:50.161 "method": "bdev_nvme_start_mdns_discovery", 00:43:50.161 "params": { 00:43:50.161 "name": "cdc", 00:43:50.161 "svcname": "_nvme-disc._tcp", 00:43:50.161 "hostnqn": "nqn.2021-12.io.spdk:test" 00:43:50.161 } 00:43:50.161 } 00:43:50.161 Got JSON-RPC error response 00:43:50.161 GoRPCClient: error on JSON-RPC call 00:43:50.161 08:40:23 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:43:50.161 08:40:23 -- common/autotest_common.sh@643 -- # es=1 00:43:50.161 08:40:23 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:43:50.161 08:40:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:43:50.161 08:40:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:43:50.161 08:40:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:50.161 08:40:23 -- common/autotest_common.sh@10 -- # set +x 00:43:50.161 08:40:23 -- host/mdns_discovery.sh@76 -- # sort 00:43:50.420 08:40:23 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:43:50.420 08:40:23 -- host/mdns_discovery.sh@76 -- # xargs 00:43:50.420 08:40:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:50.420 08:40:23 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:43:50.420 08:40:23 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:43:50.420 08:40:23 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:43:50.420 08:40:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:50.420 08:40:23 -- common/autotest_common.sh@10 -- # set +x 00:43:50.420 08:40:23 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:43:50.420 08:40:23 -- host/mdns_discovery.sh@64 -- # sort 00:43:50.420 08:40:23 -- host/mdns_discovery.sh@64 -- # xargs 00:43:50.420 08:40:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:50.420 08:40:23 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:43:50.420 08:40:23 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:43:50.420 08:40:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:43:50.420 08:40:23 -- common/autotest_common.sh@10 -- # set +x 00:43:50.420 08:40:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:43:50.420 08:40:23 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:43:50.420 08:40:23 -- host/mdns_discovery.sh@197 -- # kill 85644 00:43:50.420 08:40:23 -- host/mdns_discovery.sh@200 -- # wait 85644 00:43:50.420 [2024-04-17 08:40:23.695055] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:43:50.679 08:40:23 -- host/mdns_discovery.sh@201 -- # kill 85724 00:43:50.679 Got SIGTERM, quitting. 00:43:50.679 08:40:23 -- host/mdns_discovery.sh@202 -- # kill 85673 00:43:50.679 08:40:23 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:43:50.679 08:40:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:43:50.679 08:40:23 -- nvmf/common.sh@116 -- # sync 00:43:50.679 Got SIGTERM, quitting. 00:43:50.679 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:43:50.679 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:43:50.679 avahi-daemon 0.8 exiting. 
00:43:50.679 08:40:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:43:50.679 08:40:23 -- nvmf/common.sh@119 -- # set +e 00:43:50.679 08:40:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:43:50.679 08:40:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:43:50.679 rmmod nvme_tcp 00:43:50.679 rmmod nvme_fabrics 00:43:50.679 rmmod nvme_keyring 00:43:50.679 08:40:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:43:50.679 08:40:23 -- nvmf/common.sh@123 -- # set -e 00:43:50.679 08:40:23 -- nvmf/common.sh@124 -- # return 0 00:43:50.679 08:40:23 -- nvmf/common.sh@477 -- # '[' -n 85594 ']' 00:43:50.679 08:40:23 -- nvmf/common.sh@478 -- # killprocess 85594 00:43:50.679 08:40:23 -- common/autotest_common.sh@926 -- # '[' -z 85594 ']' 00:43:50.679 08:40:23 -- common/autotest_common.sh@930 -- # kill -0 85594 00:43:50.679 08:40:23 -- common/autotest_common.sh@931 -- # uname 00:43:50.679 08:40:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:43:50.679 08:40:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85594 00:43:50.679 killing process with pid 85594 00:43:50.679 08:40:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:43:50.679 08:40:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:43:50.679 08:40:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85594' 00:43:50.679 08:40:23 -- common/autotest_common.sh@945 -- # kill 85594 00:43:50.679 08:40:23 -- common/autotest_common.sh@950 -- # wait 85594 00:43:51.247 08:40:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:43:51.247 08:40:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:43:51.247 08:40:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:43:51.247 08:40:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:43:51.247 08:40:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:43:51.247 08:40:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:51.247 08:40:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:51.247 08:40:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:51.247 08:40:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:43:51.247 ************************************ 00:43:51.247 END TEST nvmf_mdns_discovery 00:43:51.247 ************************************ 00:43:51.247 00:43:51.247 real 0m20.554s 00:43:51.247 user 0m39.958s 00:43:51.247 sys 0m2.030s 00:43:51.247 08:40:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:51.247 08:40:24 -- common/autotest_common.sh@10 -- # set +x 00:43:51.247 08:40:24 -- nvmf/nvmf.sh@114 -- # [[ 1 -eq 1 ]] 00:43:51.247 08:40:24 -- nvmf/nvmf.sh@115 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:43:51.247 08:40:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:43:51.247 08:40:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:43:51.247 08:40:24 -- common/autotest_common.sh@10 -- # set +x 00:43:51.247 ************************************ 00:43:51.247 START TEST nvmf_multipath 00:43:51.247 ************************************ 00:43:51.247 08:40:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:43:51.247 * Looking for test storage... 
00:43:51.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:43:51.247 08:40:24 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:51.247 08:40:24 -- nvmf/common.sh@7 -- # uname -s 00:43:51.247 08:40:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:51.247 08:40:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:51.247 08:40:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:51.247 08:40:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:51.247 08:40:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:51.247 08:40:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:51.247 08:40:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:51.247 08:40:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:51.247 08:40:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:51.247 08:40:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:51.247 08:40:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:43:51.247 08:40:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:43:51.247 08:40:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:51.247 08:40:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:51.247 08:40:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:51.247 08:40:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:51.247 08:40:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:51.247 08:40:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:51.247 08:40:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:51.247 08:40:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:51.247 08:40:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:51.247 08:40:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:51.247 08:40:24 -- paths/export.sh@5 
-- # export PATH 00:43:51.247 08:40:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:51.507 08:40:24 -- nvmf/common.sh@46 -- # : 0 00:43:51.507 08:40:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:43:51.507 08:40:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:43:51.507 08:40:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:43:51.507 08:40:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:51.507 08:40:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:51.507 08:40:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:43:51.507 08:40:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:43:51.507 08:40:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:43:51.507 08:40:24 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:51.507 08:40:24 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:51.507 08:40:24 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:51.507 08:40:24 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:43:51.507 08:40:24 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:43:51.507 08:40:24 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:43:51.507 08:40:24 -- host/multipath.sh@30 -- # nvmftestinit 00:43:51.507 08:40:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:43:51.507 08:40:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:51.507 08:40:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:43:51.507 08:40:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:43:51.507 08:40:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:43:51.507 08:40:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:51.507 08:40:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:51.507 08:40:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:51.507 08:40:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:43:51.507 08:40:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:43:51.507 08:40:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:43:51.507 08:40:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:43:51.507 08:40:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:43:51.507 08:40:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:43:51.507 08:40:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:51.507 08:40:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:51.507 08:40:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:43:51.507 08:40:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:43:51.507 08:40:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:51.507 08:40:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:51.507 08:40:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:51.507 08:40:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:51.507 08:40:24 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:51.507 08:40:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:43:51.507 08:40:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:51.507 08:40:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:51.507 08:40:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:43:51.507 08:40:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:43:51.507 Cannot find device "nvmf_tgt_br" 00:43:51.507 08:40:24 -- nvmf/common.sh@154 -- # true 00:43:51.507 08:40:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:43:51.507 Cannot find device "nvmf_tgt_br2" 00:43:51.507 08:40:24 -- nvmf/common.sh@155 -- # true 00:43:51.507 08:40:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:43:51.507 08:40:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:43:51.507 Cannot find device "nvmf_tgt_br" 00:43:51.507 08:40:24 -- nvmf/common.sh@157 -- # true 00:43:51.507 08:40:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:43:51.507 Cannot find device "nvmf_tgt_br2" 00:43:51.507 08:40:24 -- nvmf/common.sh@158 -- # true 00:43:51.507 08:40:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:43:51.507 08:40:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:43:51.507 08:40:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:51.507 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:51.507 08:40:24 -- nvmf/common.sh@161 -- # true 00:43:51.507 08:40:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:51.507 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:51.507 08:40:24 -- nvmf/common.sh@162 -- # true 00:43:51.507 08:40:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:43:51.507 08:40:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:43:51.507 08:40:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:43:51.507 08:40:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:43:51.507 08:40:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:43:51.507 08:40:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:43:51.507 08:40:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:43:51.507 08:40:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:43:51.767 08:40:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:43:51.767 08:40:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:43:51.767 08:40:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:43:51.767 08:40:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:43:51.767 08:40:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:43:51.767 08:40:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:43:51.767 08:40:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:43:51.767 08:40:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:43:51.767 08:40:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:43:51.767 08:40:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:43:51.767 08:40:24 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:43:51.767 08:40:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:43:51.767 08:40:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:43:51.767 08:40:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:43:51.767 08:40:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:51.767 08:40:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:43:51.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:51.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:43:51.767 00:43:51.767 --- 10.0.0.2 ping statistics --- 00:43:51.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:51.767 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:43:51.767 08:40:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:43:51.767 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:43:51.767 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:43:51.767 00:43:51.767 --- 10.0.0.3 ping statistics --- 00:43:51.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:51.767 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:43:51.767 08:40:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:43:51.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:51.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:43:51.767 00:43:51.767 --- 10.0.0.1 ping statistics --- 00:43:51.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:51.767 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:43:51.767 08:40:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:51.767 08:40:24 -- nvmf/common.sh@421 -- # return 0 00:43:51.767 08:40:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:43:51.767 08:40:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:51.767 08:40:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:43:51.767 08:40:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:43:51.767 08:40:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:51.767 08:40:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:43:51.767 08:40:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:43:51.767 08:40:24 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:43:51.767 08:40:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:43:51.767 08:40:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:43:51.767 08:40:24 -- common/autotest_common.sh@10 -- # set +x 00:43:51.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:51.767 08:40:24 -- nvmf/common.sh@469 -- # nvmfpid=86238 00:43:51.767 08:40:24 -- nvmf/common.sh@470 -- # waitforlisten 86238 00:43:51.767 08:40:24 -- common/autotest_common.sh@819 -- # '[' -z 86238 ']' 00:43:51.767 08:40:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:51.767 08:40:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:43:51.767 08:40:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:43:51.767 08:40:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:43:51.767 08:40:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:43:51.767 08:40:24 -- common/autotest_common.sh@10 -- # set +x 00:43:51.767 [2024-04-17 08:40:24.993236] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:43:51.767 [2024-04-17 08:40:24.993302] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:52.051 [2024-04-17 08:40:25.132891] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:52.051 [2024-04-17 08:40:25.237733] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:43:52.051 [2024-04-17 08:40:25.237861] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:52.051 [2024-04-17 08:40:25.237868] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:52.051 [2024-04-17 08:40:25.237873] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:52.051 [2024-04-17 08:40:25.238874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:43:52.051 [2024-04-17 08:40:25.238877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:52.630 08:40:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:43:52.630 08:40:25 -- common/autotest_common.sh@852 -- # return 0 00:43:52.630 08:40:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:43:52.630 08:40:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:43:52.631 08:40:25 -- common/autotest_common.sh@10 -- # set +x 00:43:52.631 08:40:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:52.631 08:40:25 -- host/multipath.sh@33 -- # nvmfapp_pid=86238 00:43:52.631 08:40:25 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:52.890 [2024-04-17 08:40:26.087619] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:52.890 08:40:26 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:43:53.149 Malloc0 00:43:53.149 08:40:26 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:43:53.409 08:40:26 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:53.669 08:40:26 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:53.669 [2024-04-17 08:40:26.950023] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:53.669 08:40:26 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:43:53.929 [2024-04-17 08:40:27.141814] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:43:53.929 08:40:27 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:43:53.929 08:40:27 -- host/multipath.sh@44 -- # bdevperf_pid=86336 
00:43:53.929 08:40:27 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:43:53.929 08:40:27 -- host/multipath.sh@47 -- # waitforlisten 86336 /var/tmp/bdevperf.sock 00:43:53.929 08:40:27 -- common/autotest_common.sh@819 -- # '[' -z 86336 ']' 00:43:53.929 08:40:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:43:53.929 08:40:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:43:53.929 08:40:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:43:53.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:43:53.929 08:40:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:43:53.929 08:40:27 -- common/autotest_common.sh@10 -- # set +x 00:43:54.869 08:40:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:43:54.869 08:40:28 -- common/autotest_common.sh@852 -- # return 0 00:43:54.869 08:40:28 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:43:55.129 08:40:28 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:43:55.388 Nvme0n1 00:43:55.388 08:40:28 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:43:55.648 Nvme0n1 00:43:55.648 08:40:28 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:43:55.648 08:40:28 -- host/multipath.sh@78 -- # sleep 1 00:43:57.035 08:40:29 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:43:57.035 08:40:29 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:43:57.035 08:40:30 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:43:57.294 08:40:30 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:43:57.294 08:40:30 -- host/multipath.sh@65 -- # dtrace_pid=86418 00:43:57.294 08:40:30 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86238 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:43:57.294 08:40:30 -- host/multipath.sh@66 -- # sleep 6 00:44:03.864 08:40:36 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:44:03.864 08:40:36 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:44:03.864 08:40:36 -- host/multipath.sh@67 -- # active_port=4421 00:44:03.864 08:40:36 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:44:03.864 Attaching 4 probes... 
00:44:03.864 @path[10.0.0.2, 4421]: 21269 00:44:03.864 @path[10.0.0.2, 4421]: 21518 00:44:03.864 @path[10.0.0.2, 4421]: 21695 00:44:03.864 @path[10.0.0.2, 4421]: 21939 00:44:03.864 @path[10.0.0.2, 4421]: 23267 00:44:03.864 08:40:36 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:44:03.864 08:40:36 -- host/multipath.sh@69 -- # sed -n 1p 00:44:03.864 08:40:36 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:44:03.864 08:40:36 -- host/multipath.sh@69 -- # port=4421 00:44:03.864 08:40:36 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:44:03.864 08:40:36 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:44:03.864 08:40:36 -- host/multipath.sh@72 -- # kill 86418 00:44:03.864 08:40:36 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:44:03.864 08:40:36 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:44:03.864 08:40:36 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:44:03.864 08:40:36 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:44:03.864 08:40:37 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:44:03.864 08:40:37 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86238 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:44:03.864 08:40:37 -- host/multipath.sh@65 -- # dtrace_pid=86555 00:44:03.864 08:40:37 -- host/multipath.sh@66 -- # sleep 6 00:44:10.489 08:40:43 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:44:10.489 08:40:43 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:44:10.489 08:40:43 -- host/multipath.sh@67 -- # active_port=4420 00:44:10.489 08:40:43 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:44:10.489 Attaching 4 probes... 
00:44:10.489 @path[10.0.0.2, 4420]: 20640 00:44:10.489 @path[10.0.0.2, 4420]: 21859 00:44:10.489 @path[10.0.0.2, 4420]: 21563 00:44:10.489 @path[10.0.0.2, 4420]: 20907 00:44:10.489 @path[10.0.0.2, 4420]: 21782 00:44:10.489 08:40:43 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:44:10.489 08:40:43 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:44:10.489 08:40:43 -- host/multipath.sh@69 -- # sed -n 1p 00:44:10.489 08:40:43 -- host/multipath.sh@69 -- # port=4420 00:44:10.489 08:40:43 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:44:10.489 08:40:43 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:44:10.489 08:40:43 -- host/multipath.sh@72 -- # kill 86555 00:44:10.489 08:40:43 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:44:10.489 08:40:43 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:44:10.489 08:40:43 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:44:10.489 08:40:43 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:44:10.489 08:40:43 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:44:10.489 08:40:43 -- host/multipath.sh@65 -- # dtrace_pid=86679 00:44:10.489 08:40:43 -- host/multipath.sh@66 -- # sleep 6 00:44:10.489 08:40:43 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86238 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:44:17.053 08:40:49 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:44:17.053 08:40:49 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:44:17.053 08:40:50 -- host/multipath.sh@67 -- # active_port=4421 00:44:17.053 08:40:50 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:44:17.053 Attaching 4 probes... 
00:44:17.053 @path[10.0.0.2, 4421]: 15815 00:44:17.053 @path[10.0.0.2, 4421]: 22270 00:44:17.053 @path[10.0.0.2, 4421]: 21433 00:44:17.053 @path[10.0.0.2, 4421]: 21395 00:44:17.053 @path[10.0.0.2, 4421]: 21347 00:44:17.053 08:40:50 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:44:17.053 08:40:50 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:44:17.053 08:40:50 -- host/multipath.sh@69 -- # sed -n 1p 00:44:17.053 08:40:50 -- host/multipath.sh@69 -- # port=4421 00:44:17.053 08:40:50 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:44:17.053 08:40:50 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:44:17.054 08:40:50 -- host/multipath.sh@72 -- # kill 86679 00:44:17.054 08:40:50 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:44:17.054 08:40:50 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:44:17.054 08:40:50 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:44:17.054 08:40:50 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:44:17.314 08:40:50 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:44:17.314 08:40:50 -- host/multipath.sh@65 -- # dtrace_pid=86816 00:44:17.314 08:40:50 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86238 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:44:17.314 08:40:50 -- host/multipath.sh@66 -- # sleep 6 00:44:23.884 08:40:56 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:44:23.884 08:40:56 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:44:23.884 08:40:56 -- host/multipath.sh@67 -- # active_port= 00:44:23.884 08:40:56 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:44:23.884 Attaching 4 probes... 
00:44:23.884 00:44:23.884 00:44:23.884 00:44:23.884 00:44:23.884 00:44:23.884 08:40:56 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:44:23.884 08:40:56 -- host/multipath.sh@69 -- # sed -n 1p 00:44:23.884 08:40:56 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:44:23.884 08:40:56 -- host/multipath.sh@69 -- # port= 00:44:23.884 08:40:56 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:44:23.884 08:40:56 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:44:23.884 08:40:56 -- host/multipath.sh@72 -- # kill 86816 00:44:23.884 08:40:56 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:44:23.884 08:40:56 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:44:23.884 08:40:56 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:44:23.884 08:40:56 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:44:23.884 08:40:57 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:44:23.884 08:40:57 -- host/multipath.sh@65 -- # dtrace_pid=86945 00:44:23.884 08:40:57 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86238 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:44:23.884 08:40:57 -- host/multipath.sh@66 -- # sleep 6 00:44:30.448 08:41:03 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:44:30.448 08:41:03 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:44:30.448 08:41:03 -- host/multipath.sh@67 -- # active_port=4421 00:44:30.448 08:41:03 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:44:30.448 Attaching 4 probes... 
00:44:30.448 @path[10.0.0.2, 4421]: 19081 00:44:30.448 @path[10.0.0.2, 4421]: 19662 00:44:30.448 @path[10.0.0.2, 4421]: 20035 00:44:30.448 @path[10.0.0.2, 4421]: 19912 00:44:30.448 @path[10.0.0.2, 4421]: 19885 00:44:30.448 08:41:03 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:44:30.448 08:41:03 -- host/multipath.sh@69 -- # sed -n 1p 00:44:30.448 08:41:03 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:44:30.448 08:41:03 -- host/multipath.sh@69 -- # port=4421 00:44:30.448 08:41:03 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:44:30.448 08:41:03 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:44:30.448 08:41:03 -- host/multipath.sh@72 -- # kill 86945 00:44:30.448 08:41:03 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:44:30.448 08:41:03 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:44:30.448 [2024-04-17 08:41:03.600890] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ea0b0 is same with the state(5) to be set 00:44:30.448 [identical "recv state of tqpair=0x10ea0b0 is same with the state(5) to be set" messages logged repeatedly from 08:41:03.600953 through 08:41:03.601298; duplicate lines elided] 00:44:30.448 08:41:03 -- host/multipath.sh@101 -- # sleep 1 00:44:31.384 08:41:04 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:44:31.384 08:41:04 -- host/multipath.sh@65 -- # dtrace_pid=87076 00:44:31.384 08:41:04 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86238 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:44:31.384 08:41:04 -- host/multipath.sh@66 -- # sleep 6 00:44:37.957 08:41:10 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:44:37.957 08:41:10 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:44:37.957 08:41:10 -- host/multipath.sh@67 -- # active_port=4420 00:44:37.957 08:41:10 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:44:37.957 Attaching 4 probes... 
00:44:37.957 @path[10.0.0.2, 4420]: 17995 00:44:37.957 @path[10.0.0.2, 4420]: 18705 00:44:37.957 @path[10.0.0.2, 4420]: 17690 00:44:37.957 @path[10.0.0.2, 4420]: 18643 00:44:37.957 @path[10.0.0.2, 4420]: 19006 00:44:37.957 08:41:10 -- host/multipath.sh@69 -- # sed -n 1p 00:44:37.957 08:41:10 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:44:37.957 08:41:10 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:44:37.957 08:41:10 -- host/multipath.sh@69 -- # port=4420 00:44:37.957 08:41:10 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:44:37.957 08:41:10 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:44:37.957 08:41:10 -- host/multipath.sh@72 -- # kill 87076 00:44:37.957 08:41:10 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:44:37.957 08:41:10 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:44:37.957 [2024-04-17 08:41:11.042143] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:44:37.957 08:41:11 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:44:37.957 08:41:11 -- host/multipath.sh@111 -- # sleep 6 00:44:44.522 08:41:17 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:44:44.522 08:41:17 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86238 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:44:44.522 08:41:17 -- host/multipath.sh@65 -- # dtrace_pid=87269 00:44:44.522 08:41:17 -- host/multipath.sh@66 -- # sleep 6 00:44:51.150 08:41:23 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:44:51.150 08:41:23 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:44:51.150 08:41:23 -- host/multipath.sh@67 -- # active_port=4421 00:44:51.150 08:41:23 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:44:51.151 Attaching 4 probes... 
00:44:51.151 @path[10.0.0.2, 4421]: 20270 00:44:51.151 @path[10.0.0.2, 4421]: 20555 00:44:51.151 @path[10.0.0.2, 4421]: 20574 00:44:51.151 @path[10.0.0.2, 4421]: 20314 00:44:51.151 @path[10.0.0.2, 4421]: 20162 00:44:51.151 08:41:23 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:44:51.151 08:41:23 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:44:51.151 08:41:23 -- host/multipath.sh@69 -- # sed -n 1p 00:44:51.151 08:41:23 -- host/multipath.sh@69 -- # port=4421 00:44:51.151 08:41:23 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:44:51.151 08:41:23 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:44:51.151 08:41:23 -- host/multipath.sh@72 -- # kill 87269 00:44:51.151 08:41:23 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:44:51.151 08:41:23 -- host/multipath.sh@114 -- # killprocess 86336 00:44:51.151 08:41:23 -- common/autotest_common.sh@926 -- # '[' -z 86336 ']' 00:44:51.151 08:41:23 -- common/autotest_common.sh@930 -- # kill -0 86336 00:44:51.151 08:41:23 -- common/autotest_common.sh@931 -- # uname 00:44:51.151 08:41:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:44:51.151 08:41:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86336 00:44:51.151 08:41:23 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:44:51.151 08:41:23 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:44:51.151 killing process with pid 86336 00:44:51.151 08:41:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86336' 00:44:51.151 08:41:23 -- common/autotest_common.sh@945 -- # kill 86336 00:44:51.151 08:41:23 -- common/autotest_common.sh@950 -- # wait 86336 00:44:51.151 Connection closed with partial response: 00:44:51.151 00:44:51.151 00:44:51.151 08:41:23 -- host/multipath.sh@116 -- # wait 86336 00:44:51.151 08:41:23 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:44:51.151 [2024-04-17 08:40:27.202000] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:44:51.151 [2024-04-17 08:40:27.202084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86336 ] 00:44:51.151 [2024-04-17 08:40:27.339370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:51.151 [2024-04-17 08:40:27.440133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:44:51.151 Running I/O for 90 seconds... 
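For orientation before the raw try.txt I/O trace that follows: the confirm_io_on_port check exercised three times above boils down to the short bash sketch below. It is reconstructed only from the rpc.py, jq, awk, cut, and sed invocations visible in this log's xtrace lines; the function body, local variable names, and control flow are illustrative assumptions, not the verbatim test/nvmf/host/multipath.sh source.

confirm_io_on_port() {  # sketch; arguments as logged: <ana_state> <port>
    local ana_state=$1 expected_port=$2
    # Ask the target which listener currently reports the expected ANA state
    # (same rpc.py + jq pipeline as the host/multipath.sh@67 lines above).
    local active_port
    active_port=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r ".[] | select (.ana_states[0].ana_state==\"$ana_state\") | .address.trsvcid")
    # The bpftrace probe wrote per-path I/O counters such as
    # "@path[10.0.0.2, 4421]: 19081"; pull the port out of the first counter line.
    local port
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt \
        | cut -d ']' -f1 | sed -n 1p)
    # Pass only if the observed I/O and the reported ANA state agree on the port.
    [[ $port == "$expected_port" ]] && [[ $active_port == "$expected_port" ]]
}

The paired [[ 4421 == \4\4\2\1 ]] comparisons logged at host/multipath.sh@70 and @71 correspond to the two checks at the end of this sketch.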
00:44:51.151 [2024-04-17 08:40:37.076661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:27584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.151 [2024-04-17 08:40:37.076718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.076778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:27592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.076790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.076807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:27600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.151 [2024-04-17 08:40:37.076816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.076832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.076842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.076857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:27616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.076866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.076882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.151 [2024-04-17 08:40:37.076891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.076907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.151 [2024-04-17 08:40:37.076916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.076932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:27640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.151 [2024-04-17 08:40:37.076941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.076956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.151 [2024-04-17 08:40:37.076965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.076981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:27656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.076990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.077005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.077035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.077049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:27672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.151 [2024-04-17 08:40:37.077058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.077072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:27680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.151 [2024-04-17 08:40:37.077081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.079609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.151 [2024-04-17 08:40:37.079639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.079658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:27696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.079668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.079683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.151 [2024-04-17 08:40:37.079692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.079706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.079714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.079728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:27720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.079737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.079751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.079761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.079774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:27072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.079783] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.079797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:27104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.079806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.079820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.079828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.079842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:27136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.079850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.079874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.079883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.079897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.079905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.079919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.079928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.079942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.079951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.079965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:27736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.151 [2024-04-17 08:40:37.079974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.079988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:27744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.151 [2024-04-17 08:40:37.079996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.080010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:27752 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:44:51.151 [2024-04-17 08:40:37.080019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.080033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.151 [2024-04-17 08:40:37.080042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.080056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.080065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.080079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.080088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.080102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.080110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.080124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.080133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.080151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.080160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.080174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:27248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.080183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.080197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:27264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.080206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.080219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.080228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.080242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:96 nsid:1 lba:27304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.080251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.080266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.080275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.080289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.080298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.080312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.151 [2024-04-17 08:40:37.080320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.080334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:27800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.151 [2024-04-17 08:40:37.080343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.080357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:27808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.080366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.081030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:27816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.151 [2024-04-17 08:40:37.081051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.081069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.081080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.081096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:27832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.151 [2024-04-17 08:40:37.081115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.081131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.151 [2024-04-17 08:40:37.081141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:44:51.151 [2024-04-17 08:40:37.081157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:27848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.152 [2024-04-17 08:40:37.081167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.081183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:27856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.081193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.081208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:27864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.152 [2024-04-17 08:40:37.081218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.081234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:27872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.081244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.081260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.152 [2024-04-17 08:40:37.081270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.081286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:27888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.152 [2024-04-17 08:40:37.081295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.081311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:27312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.081321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.081337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:27320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.081347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.081363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.081373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.081389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:27376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.081399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
00:44:51.152 [2024-04-17 08:40:37.081414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.081429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.081461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.081472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.081487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:27424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.081498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.081514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.081524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.081540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:27896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.152 [2024-04-17 08:40:37.081550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.081566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:27904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.152 [2024-04-17 08:40:37.081576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.083860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:27912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.152 [2024-04-17 08:40:37.083888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.083908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.083919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.083935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:27928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.083945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.083961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.083971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.083987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:27944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.152 [2024-04-17 08:40:37.083997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.084012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.152 [2024-04-17 08:40:37.084022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.084038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.084048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.084073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.152 [2024-04-17 08:40:37.084083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.084098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:27976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.084108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.084124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:27984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.152 [2024-04-17 08:40:37.084134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.084149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:27992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.084159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.084175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:28000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.152 [2024-04-17 08:40:37.084185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.084200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:28008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.152 [2024-04-17 08:40:37.084210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.084226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.084236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.084251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:27456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.084261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.084276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:27464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.084286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.084302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:27472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.084312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.084512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:27520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.084527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.084544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.084554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.084577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:27544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.084587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.084604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.084614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.084629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.084639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.084655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:28024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.084665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.084680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:44:51.152 [2024-04-17 08:40:37.084690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.084706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:28040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.152 [2024-04-17 08:40:37.084716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.086013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:28048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.152 [2024-04-17 08:40:37.086039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.086059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.152 [2024-04-17 08:40:37.086070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.086086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:28064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:37.086096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:37.086112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:28072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.152 [2024-04-17 08:40:37.086122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:43.556925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.152 [2024-04-17 08:40:43.556988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:43.557037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:43.557050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:43.557068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:43.557102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:43.557119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.152 [2024-04-17 08:40:43.557130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:43.557147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 
nsid:1 lba:113888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:43.557157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:43.557174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.152 [2024-04-17 08:40:43.557185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:43.557201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:43.557212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:43.557228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:43.557238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:43.557379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:43.557406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:43.557427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:43.557438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:43.557456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:43.557467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:43.557485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:43.557496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:43.557513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:43.557524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:44:51.152 [2024-04-17 08:40:43.557542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:113472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.152 [2024-04-17 08:40:43.557552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:44:51.153 [2024-04-17 08:40:43.557569] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.153 [2024-04-17 08:40:43.557589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:44:51.153 [2024-04-17 08:40:43.557607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.153 [2024-04-17 08:40:43.557618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:44:51.153 [2024-04-17 08:40:43.557635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.153 [2024-04-17 08:40:43.557646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:44:51.153 [2024-04-17 08:40:43.557663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:113928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.153 [2024-04-17 08:40:43.557673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:44:51.153 [2024-04-17 08:40:43.557691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.153 [2024-04-17 08:40:43.557701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:44:51.153 [2024-04-17 08:40:43.557719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.153 [2024-04-17 08:40:43.557729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:44:51.153 [2024-04-17 08:40:43.557747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.153 [2024-04-17 08:40:43.557757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:44:51.153 [2024-04-17 08:40:43.557775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.153 [2024-04-17 08:40:43.557785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:44:51.153 [2024-04-17 08:40:43.557803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.153 [2024-04-17 08:40:43.557813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:44:51.153 [2024-04-17 08:40:43.557830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:113976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.153 [2024-04-17 08:40:43.557841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 
sqhd:0014 p:0 m:0 dnr:0 00:44:51.153 [2024-04-17 08:40:43.557858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.153 [2024-04-17 08:40:43.557869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:44:51.153 [trimmed: the same command/completion pair repeats for every outstanding I/O in this burst (08:40:43) — READ and WRITE on sqid:1, lba 113528 through 114440, sqhd 0016 through 0066, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02)]
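[Note: the SPDK NVMe driver logs each failed I/O twice — the command via nvme_io_qpair_print_command (nvme_qpair.c:243) and its completion via spdk_nvme_print_completion (nvme_qpair.c:474). ASYMMETRIC ACCESS INACCESSIBLE (03/02) is the NVMe path-related status (sct 0x3, sc 0x2) returned while the namespace's ANA group is inaccessible, consistent with an ANA state change or failover being driven by this test; dnr:0 marks the completions as retryable, so every queued READ/WRITE on sqid:1 fails with this status until the path recovers. A run like this can be condensed offline with a couple of greps — a minimal sketch, assuming the console output was saved to build.log (hypothetical name):

  # tally the failed commands by opcode (READ vs WRITE)
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' build.log | sort | uniq -c
  # count completions carrying the ANA-inaccessible status
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log
]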
00:44:51.154 [2024-04-17 08:40:50.402352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.154 [2024-04-17 08:40:50.402447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:44:51.155 [trimmed: a second burst (08:40:50) of the same command/completion pair — READ and WRITE on sqid:1, lba 125840 through 127112, sqhd 004b wrapping through 0044, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02)]
00:44:51.156 [2024-04-17 08:40:50.406283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:127120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.156 [2024-04-17 08:40:50.406292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:44:51.156 [2024-04-17 08:40:50.406311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.156 [2024-04-17 08:40:50.406321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:44:51.156 [2024-04-17 08:40:50.406341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.156 [2024-04-17 08:40:50.406350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:44:51.156 [2024-04-17 08:40:50.406370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.156 [2024-04-17 08:40:50.406379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:44:51.156 [2024-04-17 08:40:50.406400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.156 [2024-04-17 08:40:50.406420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.601892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.601949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.601971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:124848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.601981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:124320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:124328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:124352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:124496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:124520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:124896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:124928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:124936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:124968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602307] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:124984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:125008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:124568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:125016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:125048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.157 [2024-04-17 08:41:03.602603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:124616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:124680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.157 [2024-04-17 08:41:03.602786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.157 [2024-04-17 08:41:03.602806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.157 [2024-04-17 08:41:03.602845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.157 [2024-04-17 08:41:03.602884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.157 [2024-04-17 08:41:03.602903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 
[2024-04-17 08:41:03.602933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:125192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.602962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.157 [2024-04-17 08:41:03.602982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.602993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:125208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.157 [2024-04-17 08:41:03.603002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.603018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:125216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.603027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.603037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.603046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.603057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:125232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.157 [2024-04-17 08:41:03.603083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.603094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.603104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.603115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.157 [2024-04-17 08:41:03.603125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.603136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.603146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.603157] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.603178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.603188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.603197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.603208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.157 [2024-04-17 08:41:03.603226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.603237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.157 [2024-04-17 08:41:03.603246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.603257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:125296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.157 [2024-04-17 08:41:03.603266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.157 [2024-04-17 08:41:03.603277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.603285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.603309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:125320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.603329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.603348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:125336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.603368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:125344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.603388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:125352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.603407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:125360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.603434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:125368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.603454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:125376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.603474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.603494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:125392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.603513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.603533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.603555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:125416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.603579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603589] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:88 nsid:1 lba:125424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.603598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.603618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.603637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.603657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.603676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:125464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.603696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:125472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.603715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:125480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.603734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:125488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.603754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:125496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.603773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.603793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.603812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.603835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.603855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.603875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.603895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:125552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.603914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:125560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.603934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.603954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:125576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.603973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.603984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:124704 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.603992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:124712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.604012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:124736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.604032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.604051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.604075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.604094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.604119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.604139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:124832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.604158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.604178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:124864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 
[2024-04-17 08:41:03.604198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:124872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.604218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:124880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.604237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.604257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:124904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.604276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:124912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.604295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.604314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.604338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.604357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.604377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.604405] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.604424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.604446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:51.158 [2024-04-17 08:41:03.604464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:124920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.604484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.604503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.158 [2024-04-17 08:41:03.604514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:124976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.158 [2024-04-17 08:41:03.604524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.159 [2024-04-17 08:41:03.604535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:125000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.159 [2024-04-17 08:41:03.604543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.159 [2024-04-17 08:41:03.604554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.159 [2024-04-17 08:41:03.604563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.159 [2024-04-17 08:41:03.604573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.159 [2024-04-17 08:41:03.604587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:51.159 [2024-04-17 08:41:03.604598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:51.159 [2024-04-17 08:41:03.604607] nvme_qpair.c: 
00:44:51.159 [2024-04-17 08:41:03.604617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1331bd0 is same with the state(5) to be set
00:44:51.159 [2024-04-17 08:41:03.604628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:44:51.159 [2024-04-17 08:41:03.604634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:44:51.159 [2024-04-17 08:41:03.604641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125096 len:8 PRP1 0x0 PRP2 0x0
00:44:51.159 [2024-04-17 08:41:03.604650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:44:51.159 [2024-04-17 08:41:03.604698] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1331bd0 was disconnected and freed. reset controller.
00:44:51.159 [2024-04-17 08:41:03.605861] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:44:51.159 [2024-04-17 08:41:03.605936] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cc6e0 (9): Bad file descriptor
00:44:51.159 [2024-04-17 08:41:03.606048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:51.159 [2024-04-17 08:41:03.606082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:51.159 [2024-04-17 08:41:03.606094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14cc6e0 with addr=10.0.0.2, port=4421
00:44:51.159 [2024-04-17 08:41:03.606104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cc6e0 is same with the state(5) to be set
00:44:51.159 [2024-04-17 08:41:03.606121] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cc6e0 (9): Bad file descriptor
00:44:51.159 [2024-04-17 08:41:03.606135] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:44:51.159 [2024-04-17 08:41:03.606148] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:44:51.159 [2024-04-17 08:41:03.606159] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:44:51.159 [2024-04-17 08:41:03.606178] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:44:51.159 [2024-04-17 08:41:03.606187] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:44:51.159 [2024-04-17 08:41:13.639160] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
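For context, the ASYMMETRIC ACCESS INACCESSIBLE aborts and the controller reset above are what the multipath test provokes on purpose: it flips the ANA state of the active listener so in-flight I/O fails on that path and bdev_nvme fails over to the other port (here 10.0.0.2:4421). A minimal bash sketch of that step, assuming rpc.py's nvmf_subsystem_listener_set_ana_state RPC with -t/-a/-s/-n options and the subsystem, address, and ports seen in this run:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    set_ana_state() {
        # $1 = listener trsvcid, $2 = optimized|non_optimized|inaccessible
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s "$1" -n "$2"
    }

    # Fail the active path: queued commands on it complete with
    # ASYMMETRIC ACCESS INACCESSIBLE (03/02), the qpair is freed, and
    # bdev_nvme resets the controller and reconnects to port 4421.
    set_ana_state 4420 inaccessible
    set_ana_state 4421 optimized
    sleep 1   # give spdk_nvme_ctrlr_reconnect_poll_async time to fail over

The reconnect is not instantaneous: the connect() failed, errno = 111 lines above show the host retrying until the new path accepts the TCP connection.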
00:44:51.159 Received shutdown signal, test time was about 54.579558 seconds
00:44:51.159
00:44:51.159 Latency(us)
00:44:51.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:44:51.159 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:44:51.159 Verification LBA range: start 0x0 length 0x4000
00:44:51.159 Nvme0n1 : 54.58 11664.55 45.56 0.00 0.00 10961.19 540.17 7033243.39
00:44:51.159 ===================================================================================================================
00:44:51.159 Total : 11664.55 45.56 0.00 0.00 10961.19 540.17 7033243.39
00:44:51.159 08:41:23 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:44:51.159 08:41:23 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:44:51.159 08:41:23 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:44:51.159 08:41:23 -- host/multipath.sh@125 -- # nvmftestfini
00:44:51.159 08:41:23 -- nvmf/common.sh@476 -- # nvmfcleanup
00:44:51.159 08:41:23 -- nvmf/common.sh@116 -- # sync
00:44:51.159 08:41:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:44:51.159 08:41:23 -- nvmf/common.sh@119 -- # set +e
00:44:51.159 08:41:23 -- nvmf/common.sh@120 -- # for i in {1..20}
00:44:51.159 08:41:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:44:51.159 rmmod nvme_tcp
00:44:51.159 rmmod nvme_fabrics
00:44:51.159 rmmod nvme_keyring
00:44:51.159 08:41:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:44:51.159 08:41:24 -- nvmf/common.sh@123 -- # set -e
00:44:51.159 08:41:24 -- nvmf/common.sh@124 -- # return 0
00:44:51.159 08:41:24 -- nvmf/common.sh@477 -- # '[' -n 86238 ']'
00:44:51.159 08:41:24 -- nvmf/common.sh@478 -- # killprocess 86238
00:44:51.159 08:41:24 -- common/autotest_common.sh@926 -- # '[' -z 86238 ']'
00:44:51.159 08:41:24 -- common/autotest_common.sh@930 -- # kill -0 86238
00:44:51.159 08:41:24 -- common/autotest_common.sh@931 -- # uname
00:44:51.159 08:41:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:44:51.159 08:41:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86238
00:44:51.159 killing process with pid 86238
00:44:51.159 08:41:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:44:51.159 08:41:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:44:51.159 08:41:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86238'
00:44:51.159 08:41:24 -- common/autotest_common.sh@945 -- # kill 86238
00:44:51.159 08:41:24 -- common/autotest_common.sh@950 -- # wait 86238
00:44:51.159 08:41:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:44:51.159 08:41:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:44:51.159 08:41:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:44:51.159 08:41:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:44:51.159 08:41:24 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:44:51.159 08:41:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:44:51.159 08:41:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:44:51.159 08:41:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:44:51.159 08:41:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:44:51.159
00:44:51.159 real 0m59.932s
00:44:51.159 user 2m51.366s
00:44:51.159 sys 0m11.441s
00:44:51.159 08:41:24 -- common/autotest_common.sh@1105 -- # xtrace_disable
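The teardown trace above walks through autotest_common.sh's killprocess helper line by line. Reduced to the steps visible in the trace (the real helper has more branches, for example for targets launched via sudo), it amounts to something like this sketch:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1        # the '[' -z 86238 ']' guard above
        kill -0 "$pid" || return 1       # signal 0: is the pid still alive?
        if [[ $(uname) == Linux ]]; then
            # the target shows up as the SPDK reactor thread, e.g. reactor_0
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                      # reap it so the exit status propagates
    }

killprocess 86238 is exactly the call traced at nvmf/common.sh@478; the wait at the end is why the nvmf_tgt's exit status can still fail the test after the kill.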
00:44:51.159 08:41:24 -- common/autotest_common.sh@10 -- # set +x
00:44:51.159 ************************************
00:44:51.159 END TEST nvmf_multipath
00:44:51.159 ************************************
00:44:51.159 08:41:24 -- nvmf/nvmf.sh@116 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:44:51.159 08:41:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:44:51.159 08:41:24 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:44:51.159 08:41:24 -- common/autotest_common.sh@10 -- # set +x
00:44:51.159 ************************************
00:44:51.159 START TEST nvmf_timeout
00:44:51.159 ************************************
00:44:51.159 08:41:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:44:51.421 * Looking for test storage...
00:44:51.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:44:51.421 08:41:24 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:44:51.421 08:41:24 -- nvmf/common.sh@7 -- # uname -s
00:44:51.421 08:41:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:44:51.421 08:41:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:44:51.421 08:41:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:44:51.421 08:41:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:44:51.421 08:41:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:44:51.421 08:41:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:44:51.421 08:41:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:44:51.421 08:41:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:44:51.421 08:41:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:44:51.421 08:41:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:44:51.421 08:41:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2
00:44:51.421 08:41:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2
00:44:51.421 08:41:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:44:51.421 08:41:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:44:51.421 08:41:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:44:51.421 08:41:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:44:51.421 08:41:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:44:51.421 08:41:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:44:51.421 08:41:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:44:51.421 08:41:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:44:51.421 08:41:24 -- paths/export.sh@3 -- #
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:51.421 08:41:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:51.421 08:41:24 -- paths/export.sh@5 -- # export PATH 00:44:51.421 08:41:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:51.422 08:41:24 -- nvmf/common.sh@46 -- # : 0 00:44:51.422 08:41:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:44:51.422 08:41:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:44:51.422 08:41:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:44:51.422 08:41:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:51.422 08:41:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:51.422 08:41:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:44:51.422 08:41:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:44:51.422 08:41:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:44:51.422 08:41:24 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:44:51.422 08:41:24 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:44:51.422 08:41:24 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:51.422 08:41:24 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:44:51.422 08:41:24 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:44:51.422 08:41:24 -- host/timeout.sh@19 -- # nvmftestinit 00:44:51.422 08:41:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:44:51.422 08:41:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:51.422 08:41:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:44:51.422 08:41:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:44:51.422 08:41:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:44:51.422 08:41:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:51.422 08:41:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:51.422 08:41:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:51.422 08:41:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 
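The hostnqn/hostid pair generated by nvme gen-hostnqn above is what the initiator presents when logging in to a subsystem. This particular test drives I/O through bdevperf rather than the kernel initiator, but the manual equivalent of its $NVME_CONNECT/$NVME_HOST settings would look roughly like this (subsystem NQN and listener address taken from the target setup later in this log):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 \
        --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2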
00:44:51.422 08:41:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:44:51.422 08:41:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:44:51.422 08:41:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:44:51.422 08:41:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:44:51.422 08:41:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:44:51.422 08:41:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:51.422 08:41:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:51.422 08:41:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:44:51.422 08:41:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:44:51.422 08:41:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:44:51.422 08:41:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:44:51.422 08:41:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:44:51.422 08:41:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:51.422 08:41:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:44:51.422 08:41:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:44:51.422 08:41:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:44:51.422 08:41:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:44:51.422 08:41:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:44:51.422 08:41:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:44:51.422 Cannot find device "nvmf_tgt_br" 00:44:51.422 08:41:24 -- nvmf/common.sh@154 -- # true 00:44:51.422 08:41:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:44:51.422 Cannot find device "nvmf_tgt_br2" 00:44:51.422 08:41:24 -- nvmf/common.sh@155 -- # true 00:44:51.422 08:41:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:44:51.422 08:41:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:44:51.422 Cannot find device "nvmf_tgt_br" 00:44:51.422 08:41:24 -- nvmf/common.sh@157 -- # true 00:44:51.422 08:41:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:44:51.422 Cannot find device "nvmf_tgt_br2" 00:44:51.422 08:41:24 -- nvmf/common.sh@158 -- # true 00:44:51.422 08:41:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:44:51.422 08:41:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:44:51.422 08:41:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:51.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:51.422 08:41:24 -- nvmf/common.sh@161 -- # true 00:44:51.422 08:41:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:51.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:51.422 08:41:24 -- nvmf/common.sh@162 -- # true 00:44:51.422 08:41:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:44:51.422 08:41:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:44:51.422 08:41:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:44:51.422 08:41:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:44:51.422 08:41:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:44:51.422 08:41:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:44:51.422 08:41:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:44:51.422 08:41:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:44:51.422 08:41:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:44:51.682 08:41:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:44:51.682 08:41:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:44:51.682 08:41:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:44:51.682 08:41:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:44:51.682 08:41:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:44:51.682 08:41:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:44:51.682 08:41:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:44:51.682 08:41:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:44:51.682 08:41:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:44:51.682 08:41:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:44:51.682 08:41:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:44:51.682 08:41:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:44:51.682 08:41:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:44:51.682 08:41:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:44:51.682 08:41:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:44:51.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:51.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:44:51.682 00:44:51.682 --- 10.0.0.2 ping statistics --- 00:44:51.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:51.682 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:44:51.682 08:41:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:44:51.682 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:44:51.682 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:44:51.682 00:44:51.682 --- 10.0.0.3 ping statistics --- 00:44:51.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:51.682 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:44:51.682 08:41:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:44:51.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:51.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:44:51.682 00:44:51.682 --- 10.0.0.1 ping statistics --- 00:44:51.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:51.682 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:44:51.682 08:41:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:51.682 08:41:24 -- nvmf/common.sh@421 -- # return 0 00:44:51.682 08:41:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:44:51.682 08:41:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:51.682 08:41:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:44:51.682 08:41:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:44:51.682 08:41:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:51.682 08:41:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:44:51.682 08:41:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:44:51.682 08:41:24 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:44:51.682 08:41:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:44:51.682 08:41:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:44:51.682 08:41:24 -- common/autotest_common.sh@10 -- # set +x 00:44:51.682 08:41:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:44:51.682 08:41:24 -- nvmf/common.sh@469 -- # nvmfpid=87590 00:44:51.682 08:41:24 -- nvmf/common.sh@470 -- # waitforlisten 87590 00:44:51.682 08:41:24 -- common/autotest_common.sh@819 -- # '[' -z 87590 ']' 00:44:51.682 08:41:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:51.682 08:41:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:44:51.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:51.682 08:41:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:51.682 08:41:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:44:51.682 08:41:24 -- common/autotest_common.sh@10 -- # set +x 00:44:51.682 [2024-04-17 08:41:24.908059] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:44:51.682 [2024-04-17 08:41:24.908128] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:51.941 [2024-04-17 08:41:25.046621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:51.941 [2024-04-17 08:41:25.151406] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:44:51.941 [2024-04-17 08:41:25.151532] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:51.941 [2024-04-17 08:41:25.151540] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:51.941 [2024-04-17 08:41:25.151546] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
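The nvmf_veth_init block above builds the whole test network on one machine: three veth pairs, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace and the host-side ends enslaved to the nvmf_br bridge, so the initiator (10.0.0.1) and the namespaced target addresses (10.0.0.2, 10.0.0.3) can reach each other; the three pings then prove the wiring before any NVMe traffic flows. Reduced to the essential pattern for one pair (commands as traced above, cleanup and error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
    # let NVMe/TCP (port 4420) in and allow bridged forwarding
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT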
00:44:51.941 [2024-04-17 08:41:25.151666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:44:51.941 [2024-04-17 08:41:25.151890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:44:52.508 08:41:25 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:44:52.508 08:41:25 -- common/autotest_common.sh@852 -- # return 0
00:44:52.508 08:41:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:44:52.508 08:41:25 -- common/autotest_common.sh@718 -- # xtrace_disable
00:44:52.508 08:41:25 -- common/autotest_common.sh@10 -- # set +x
00:44:52.766 08:41:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:44:52.767 08:41:25 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:44:52.767 08:41:25 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:44:53.025 [2024-04-17 08:41:26.081642] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:44:53.025 08:41:26 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:44:53.025 Malloc0
00:44:53.284 08:41:26 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:44:53.284 08:41:26 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:44:53.542 08:41:26 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:44:53.800 [2024-04-17 08:41:26.969971] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:44:53.800 08:41:26 -- host/timeout.sh@32 -- # bdevperf_pid=87681
00:44:53.800 08:41:26 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:44:53.800 08:41:26 -- host/timeout.sh@34 -- # waitforlisten 87681 /var/tmp/bdevperf.sock
00:44:53.800 08:41:26 -- common/autotest_common.sh@819 -- # '[' -z 87681 ']'
00:44:53.800 08:41:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:44:53.800 08:41:26 -- common/autotest_common.sh@824 -- # local max_retries=100
00:44:53.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:44:53.800 08:41:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:44:53.800 08:41:26 -- common/autotest_common.sh@828 -- # xtrace_disable
00:44:53.800 08:41:26 -- common/autotest_common.sh@10 -- # set +x
00:44:53.800 [2024-04-17 08:41:27.032494] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
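Stripped of the xtrace noise, the target-side provisioning above is the standard five-step sequence against the default RPC socket (values verbatim from the trace; the comments are editorial glosses):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192                   # enable the TCP transport
    "$rpc" bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB RAM bdev, 512 B blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # expose the bdev as a namespace
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420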
00:44:53.800 [2024-04-17 08:41:27.032560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87681 ]
00:44:54.059 [2024-04-17 08:41:27.156912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:44:54.059 [2024-04-17 08:41:27.275038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:44:54.625 08:41:27 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:44:54.625 08:41:27 -- common/autotest_common.sh@852 -- # return 0
00:44:54.625 08:41:27 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:44:54.927 08:41:28 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:44:55.196 NVMe0n1
00:44:55.196 08:41:28 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:44:55.196 08:41:28 -- host/timeout.sh@51 -- # rpc_pid=87729
00:44:55.196 08:41:28 -- host/timeout.sh@53 -- # sleep 1
00:44:55.453 Running I/O for 10 seconds...
00:44:56.394 08:41:29 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:44:56.394 [2024-04-17 08:41:29.676141] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf81e20 is same with the state(5) to be set
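This is the crux of the timeout test: the controller is attached with deliberately tight recovery knobs, I/O is started, and then the 10.0.0.2:4420 listener is removed out from under it, which is what produces the long run of qpair-teardown and ABORTED - SQ DELETION messages that follows. The host-side half, reduced to a sketch (paths and values as traced above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # --ctrlr-loss-timeout-sec 5: give up on the controller after 5 s without a usable path
    # --reconnect-delay-sec 2:    wait 2 s between reconnect attempts
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # start the preconfigured bdevperf job over its private RPC socket
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests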
00:44:56.394 [2024-04-17 08:41:29.676277] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf81e20 is same with the state(5) to be set
00:44:56.394 [2024-04-17 08:41:29.676535] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf81e20 is same with the state(5) to be set
00:44:56.394 [2024-04-17 08:41:29.676975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:117928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:56.395 [2024-04-17 08:41:29.677013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:44:56.395 [2024-04-17 08:41:29.677033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:56.395 [2024-04-17 08:41:29.677043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:44:56.395 [2024-04-17 08:41:29.677557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:44:56.395 [2024-04-17 08:41:29.677574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:44:56.397 [2024-04-17 08:41:29.679209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:56.397 [2024-04-17 08:41:29.679216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:44:56.397 [2024-04-17 08:41:29.679224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:118584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:56.397 [2024-04-17 08:41:29.679238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.397 [2024-04-17 08:41:29.679256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:118592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.397 [2024-04-17 08:41:29.679263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.397 [2024-04-17 08:41:29.679274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:118600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.397 [2024-04-17 08:41:29.679281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.397 [2024-04-17 08:41:29.679291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.397 [2024-04-17 08:41:29.679298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.397 [2024-04-17 08:41:29.679307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:118616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.397 [2024-04-17 08:41:29.679313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.397 [2024-04-17 08:41:29.679328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.397 [2024-04-17 08:41:29.679337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.397 [2024-04-17 08:41:29.679355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.397 [2024-04-17 08:41:29.679372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.397 [2024-04-17 08:41:29.679390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:117944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.397 [2024-04-17 08:41:29.679414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.397 [2024-04-17 08:41:29.679423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.397 [2024-04-17 08:41:29.679433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.397 [2024-04-17 08:41:29.679441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.397 [2024-04-17 08:41:29.679447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.397 
[2024-04-17 08:41:29.679458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.397 [2024-04-17 08:41:29.679465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.397 [2024-04-17 08:41:29.679484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.397 [2024-04-17 08:41:29.679491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.397 [2024-04-17 08:41:29.679509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:118016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.397 [2024-04-17 08:41:29.679526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.397 [2024-04-17 08:41:29.679535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:118624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:56.397 [2024-04-17 08:41:29.679541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.397 [2024-04-17 08:41:29.679552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:118632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:56.397 [2024-04-17 08:41:29.679559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.397 [2024-04-17 08:41:29.679575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:118640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:56.397 [2024-04-17 08:41:29.679584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.397 [2024-04-17 08:41:29.679595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:118648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:56.397 [2024-04-17 08:41:29.679602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.397 [2024-04-17 08:41:29.679610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:118656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.397 [2024-04-17 08:41:29.679627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.397 [2024-04-17 08:41:29.679637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:118664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.397 [2024-04-17 08:41:29.679644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.397 [2024-04-17 08:41:29.679653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:118672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:56.397 [2024-04-17 08:41:29.679672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.398 [2024-04-17 08:41:29.679683] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:56.398 [2024-04-17 08:41:29.679698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.398 [2024-04-17 08:41:29.679711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:118032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.398 [2024-04-17 08:41:29.679718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.398 [2024-04-17 08:41:29.679726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:118040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.398 [2024-04-17 08:41:29.679740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.398 [2024-04-17 08:41:29.679754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:118072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.398 [2024-04-17 08:41:29.679761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.398 [2024-04-17 08:41:29.679770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:118088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.398 [2024-04-17 08:41:29.679777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.398 [2024-04-17 08:41:29.679796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:118128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.398 [2024-04-17 08:41:29.679807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.398 [2024-04-17 08:41:29.679818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:118136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.398 [2024-04-17 08:41:29.679825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.398 [2024-04-17 08:41:29.679834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:118144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:56.398 [2024-04-17 08:41:29.679840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.398 [2024-04-17 08:41:29.679848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6f420 is same with the state(5) to be set 00:44:56.398 [2024-04-17 08:41:29.679859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:56.398 [2024-04-17 08:41:29.679864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:56.398 [2024-04-17 08:41:29.679871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118152 len:8 PRP1 0x0 PRP2 0x0 00:44:56.398 [2024-04-17 08:41:29.679877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:56.398 [2024-04-17 
08:41:29.679934] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b6f420 was disconnected and freed. reset controller. 00:44:56.398 [2024-04-17 08:41:29.680195] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:56.398 [2024-04-17 08:41:29.680273] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b28170 (9): Bad file descriptor 00:44:56.398 [2024-04-17 08:41:29.685045] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b28170 (9): Bad file descriptor 00:44:56.398 [2024-04-17 08:41:29.685081] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:56.398 [2024-04-17 08:41:29.685089] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:56.398 [2024-04-17 08:41:29.685097] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:56.398 [2024-04-17 08:41:29.685115] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:56.398 [2024-04-17 08:41:29.685123] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:56.398 08:41:29 -- host/timeout.sh@56 -- # sleep 2 00:44:58.932 [2024-04-17 08:41:31.681439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:58.932 [2024-04-17 08:41:31.681535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:58.932 [2024-04-17 08:41:31.681550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b28170 with addr=10.0.0.2, port=4420 00:44:58.932 [2024-04-17 08:41:31.681563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28170 is same with the state(5) to be set 00:44:58.932 [2024-04-17 08:41:31.681590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b28170 (9): Bad file descriptor 00:44:58.932 [2024-04-17 08:41:31.681609] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:58.932 [2024-04-17 08:41:31.681617] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:44:58.932 [2024-04-17 08:41:31.681627] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:58.932 [2024-04-17 08:41:31.681659] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
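For reference, errno = 111 in the posix_sock_create failures above is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 at this point, presumably because the target's listener was taken down earlier in the test (it is visibly removed again at host/timeout.sh@87 further below), so every reconnect attempt is refused and the reset fails the same way. A quick check of the errno mapping on a Linux box, assuming python3 is installed:

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused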
00:44:58.932 [2024-04-17 08:41:31.681670] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:44:58.932 08:41:31 -- host/timeout.sh@57 -- # get_controller
00:44:58.932 08:41:31 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:44:58.932 08:41:31 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:44:58.932 08:41:31 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:44:58.932 08:41:31 -- host/timeout.sh@58 -- # get_bdev
00:44:58.932 08:41:31 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:44:58.932 08:41:31 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:44:58.932 08:41:32 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:44:58.932 08:41:32 -- host/timeout.sh@61 -- # sleep 5
00:45:00.838 [2024-04-17 08:41:33.677945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:45:00.838 [2024-04-17 08:41:33.678033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:45:00.838 [2024-04-17 08:41:33.678046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b28170 with addr=10.0.0.2, port=4420
00:45:00.838 [2024-04-17 08:41:33.678057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28170 is same with the state(5) to be set
00:45:00.838 [2024-04-17 08:41:33.678079] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b28170 (9): Bad file descriptor
00:45:00.838 [2024-04-17 08:41:33.678093] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:45:00.838 [2024-04-17 08:41:33.678100] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:45:00.838 [2024-04-17 08:41:33.678108] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:45:00.838 [2024-04-17 08:41:33.678131] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:45:00.838 [2024-04-17 08:41:33.678139] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:45:02.746 [2024-04-17 08:41:35.674336] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
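The get_controller/get_bdev steps traced above are plain name probes against bdevperf's RPC socket: bdev_nvme_get_controllers and bdev_get_bdevs both return JSON arrays, and jq -r '.[].name' reduces them to a name, or to an empty string once the controller has been deleted, which is what the later [[ '' == '' ]] checks rely on. A minimal standalone sketch of the same probe, assuming only the rpc.py path and socket shown in the trace plus jq on PATH:

    # Ask the bdevperf app which NVMe-oF controller/bdev it currently has.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    ctrlr=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
    bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')

    # While a reset is merely in progress, both names are still registered.
    [[ $ctrlr == NVMe0 ]] && echo 'controller still registered'
    [[ $bdev == NVMe0n1 ]] && echo 'namespace bdev still registered'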
00:45:03.684
00:45:03.684 Latency(us)
00:45:03.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:45:03.684 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:45:03.684 Verification LBA range: start 0x0 length 0x4000
00:45:03.684 NVMe0n1 : 8.13 1809.32 7.07 15.75 0.00 70195.08 2475.49 7033243.39
00:45:03.684 ===================================================================================================================
00:45:03.684 Total : 1809.32 7.07 15.75 0.00 70195.08 2475.49 7033243.39
00:45:03.684 0
00:45:03.943 08:41:37 -- host/timeout.sh@62 -- # get_controller
00:45:03.943 08:41:37 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:45:03.943 08:41:37 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:45:04.202 08:41:37 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:45:04.202 08:41:37 -- host/timeout.sh@63 -- # get_bdev
00:45:04.202 08:41:37 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:45:04.202 08:41:37 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:45:04.494 08:41:37 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:45:04.494 08:41:37 -- host/timeout.sh@65 -- # wait 87729
00:45:04.494 08:41:37 -- host/timeout.sh@67 -- # killprocess 87681
00:45:04.494 08:41:37 -- common/autotest_common.sh@926 -- # '[' -z 87681 ']'
00:45:04.494 08:41:37 -- common/autotest_common.sh@930 -- # kill -0 87681
00:45:04.494 08:41:37 -- common/autotest_common.sh@931 -- # uname
00:45:04.494 08:41:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:45:04.494 08:41:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87681
00:45:04.494 08:41:37 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:45:04.494 08:41:37 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:45:04.494 killing process with pid 87681
00:45:04.494 08:41:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87681'
00:45:04.494 08:41:37 -- common/autotest_common.sh@945 -- # kill 87681
00:45:04.494 Received shutdown signal, test time was about 9.224432 seconds
00:45:04.494
00:45:04.494 Latency(us)
00:45:04.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:45:04.494 ===================================================================================================================
00:45:04.494 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:45:04.494 08:41:37 -- common/autotest_common.sh@950 -- # wait 87681
00:45:04.767 08:41:37 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-04-17 08:41:38.208338] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:45:05.026 08:41:38 -- host/timeout.sh@74 -- # bdevperf_pid=87885
00:45:05.026 08:41:38 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:45:05.026 08:41:38 -- host/timeout.sh@76 -- # waitforlisten 87885 /var/tmp/bdevperf.sock
00:45:05.026 08:41:38 -- common/autotest_common.sh@819 -- # '[' -z 87885 ']'
00:45:05.026 08:41:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
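The waitforlisten call at the end of the block above parks the harness until the freshly forked bdevperf (pid 87885) answers on its UNIX-domain RPC socket; the local max_retries=100 that opens the next stretch of trace is that helper counting its poll attempts. The following is not the autotest_common.sh implementation, only a sketch of its shape under those assumptions, with rpc_get_methods used as a cheap liveness probe:

    # Poll an SPDK app's RPC socket until it answers, or give up.
    waitforlisten_sketch() {
        local pid=$1 sock=$2 max_retries=100
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
                rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }

    waitforlisten_sketch 87885 /var/tmp/bdevperf.sock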
00:45:05.026 08:41:38 -- common/autotest_common.sh@824 -- # local max_retries=100
00:45:05.026 08:41:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:45:05.026 08:41:38 -- common/autotest_common.sh@828 -- # xtrace_disable
00:45:05.026 08:41:38 -- common/autotest_common.sh@10 -- # set +x
00:45:05.026 [2024-04-17 08:41:38.292619] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:45:05.026 [2024-04-17 08:41:38.292706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87885 ]
00:45:05.285 [2024-04-17 08:41:38.433430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:45:05.285 [2024-04-17 08:41:38.590997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:45:06.218 08:41:39 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:45:06.218 08:41:39 -- common/autotest_common.sh@852 -- # return 0
00:45:06.218 08:41:39 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:45:06.476 08:41:39 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:45:06.733 NVMe0n1
00:45:06.733 08:41:40 -- host/timeout.sh@84 -- # rpc_pid=87934
00:45:06.733 08:41:40 -- host/timeout.sh@86 -- # sleep 1
00:45:06.733 08:41:40 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:45:07.000 Running I/O for 10 seconds...
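The bdev_nvme_attach_controller call above arms the failure-handling timers this test exercises: --reconnect-delay-sec 1 spaces reconnect attempts one second apart, --fast-io-fail-timeout-sec 2 lets queued I/O start failing after two seconds without a connection, and --ctrlr-loss-timeout-sec 5 deletes the controller outright if it has not reconnected within five seconds (reading bdev_nvme_set_options -r -1 as an uncapped bdev-layer retry count is the usual interpretation of that flag, but is an inference here). Replayed standalone with the paths, address, and NQN from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # Retry count of -1, exactly as the traced test sets it.
    "$rpc" -s "$sock" bdev_nvme_set_options -r -1

    # Attach the NVMe-oF/TCP controller with the short reconnect/fail-fast
    # timers that the 10-second bdevperf run is about to trip.
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1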
00:45:07.938 08:41:41 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:45:08.199 [2024-04-17 08:41:41.275091] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1172600 is same with the state(5) to be set
00:45:08.199 [2024-04-17 08:41:41.275698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1172600 is same with the state(5) to be set
[... the same tcp.c:1574 recv-state message for tqpair=0x1172600 repeats roughly sixty more times ...]
00:45:08.200 [2024-04-17 08:41:41.276339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:45:08.200 [2024-04-17 08:41:41.276389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:45:08.200 [2024-04-17 08:41:41.276425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:45:08.200 [2024-04-17 08:41:41.276434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... dozens more qid:1 READ/WRITE submissions, each completed with the same ABORTED - SQ DELETION (00/08) status ...]
00:45:08.202 [2024-04-17 08:41:41.278331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:45:08.202 [2024-04-17 08:41:41.278341]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.202 [2024-04-17 08:41:41.278352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.202 [2024-04-17 08:41:41.278359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.202 [2024-04-17 08:41:41.278372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.202 [2024-04-17 08:41:41.278382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.202 [2024-04-17 08:41:41.278402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.203 [2024-04-17 08:41:41.278411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.203 [2024-04-17 08:41:41.278427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.203 [2024-04-17 08:41:41.278454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:08.203 [2024-04-17 08:41:41.278472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:08.203 [2024-04-17 08:41:41.278495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:08.203 [2024-04-17 08:41:41.278510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:08.203 [2024-04-17 08:41:41.278536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:08.203 [2024-04-17 08:41:41.278573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.203 [2024-04-17 08:41:41.278592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.203 [2024-04-17 08:41:41.278607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.203 [2024-04-17 08:41:41.278636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:08.203 [2024-04-17 08:41:41.278651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.203 [2024-04-17 08:41:41.278669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:08.203 [2024-04-17 08:41:41.278687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:08.203 [2024-04-17 08:41:41.278705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:08.203 [2024-04-17 08:41:41.278726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:08.203 [2024-04-17 08:41:41.278759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.203 [2024-04-17 08:41:41.278784] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.203 [2024-04-17 08:41:41.278802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:08.203 [2024-04-17 08:41:41.278818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:08.203 [2024-04-17 08:41:41.278837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.203 [2024-04-17 08:41:41.278870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:121072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.203 [2024-04-17 08:41:41.278885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:08.203 [2024-04-17 08:41:41.278917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.203 [2024-04-17 08:41:41.278932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:08.203 [2024-04-17 08:41:41.278951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.203 [2024-04-17 08:41:41.278968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.278984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.203 [2024-04-17 08:41:41.278994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.279005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.203 [2024-04-17 08:41:41.279013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.279021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.203 [2024-04-17 08:41:41.279039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.279048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.203 [2024-04-17 08:41:41.279070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.279093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.203 [2024-04-17 08:41:41.279101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.279124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:08.203 [2024-04-17 08:41:41.279133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.279142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e420 is same with the state(5) to be set 00:45:08.203 [2024-04-17 08:41:41.279166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:45:08.203 [2024-04-17 08:41:41.279175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:45:08.203 [2024-04-17 08:41:41.279181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120584 len:8 PRP1 0x0 PRP2 0x0 00:45:08.203 [2024-04-17 08:41:41.279188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:08.203 [2024-04-17 08:41:41.279283] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e4e420 was disconnected and freed. reset controller. 
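The storm of paired NOTICE lines above is SPDK flushing every outstanding I/O when the TCP qpair drops: each command is printed by nvme_io_qpair_print_command and completed with status (00/08), i.e. Status Code Type 0x0 (generic) / Status Code 0x08, ABORTED - SQ DELETION. A minimal sketch of how an I/O completion callback can recognize that status using the public SPDK definitions (the callback and the requeue helper are hypothetical illustrations, not code from this test):

    #include "spdk/nvme.h"

    /* Hypothetical application hook; in this test the bdev_nvme layer
     * does the equivalent requeue internally. */
    static void requeue_io(void *cb_arg) { (void)cb_arg; }

    static void io_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl) &&
            cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
            /* (00/08): the submission queue was deleted while the command
             * was outstanding, so it never executed on the namespace and
             * is safe to resubmit once the controller reconnects. */
            requeue_io(cb_arg);
        }
    }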
00:45:08.203 [2024-04-17 08:41:41.279383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:45:08.203 [2024-04-17 08:41:41.279410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:45:08.203 [2024-04-17 08:41:41.279419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:45:08.203 [2024-04-17 08:41:41.279426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:45:08.203 [2024-04-17 08:41:41.279434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:45:08.203 [2024-04-17 08:41:41.279441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:45:08.203 [2024-04-17 08:41:41.279450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:45:08.203 [2024-04-17 08:41:41.279457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:45:08.204 [2024-04-17 08:41:41.279466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07170 is same with the state(5) to be set
00:45:08.204 [2024-04-17 08:41:41.279685] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:45:08.204 [2024-04-17 08:41:41.279711] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e07170 (9): Bad file descriptor
00:45:08.204 [2024-04-17 08:41:41.284691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e07170 (9): Bad file descriptor
00:45:08.204 [2024-04-17 08:41:41.284726] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:45:08.204 [2024-04-17 08:41:41.284735] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:45:08.204 [2024-04-17 08:41:41.284744] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:45:08.204 [2024-04-17 08:41:41.284765] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
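The functions named in those messages (nvme_ctrlr_disconnect, spdk_nvme_ctrlr_reconnect_poll_async, nvme_ctrlr_fail) are SPDK's asynchronous controller-reset path: tear down the admin and I/O qpairs, then poll the reconnect until it succeeds or errors out, which is the failure _bdev_nvme_reset_ctrlr_complete reports above. A hedged sketch of that loop against the public API (busy-waiting here only for brevity; bdev_nvme actually drives the poll from its own poller and retries on a schedule):

    #include "spdk/nvme.h"

    static int reset_ctrlr_sync(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc = spdk_nvme_ctrlr_disconnect(ctrlr); /* "resetting controller" */
        if (rc != 0) {
            return rc;
        }

        spdk_nvme_ctrlr_reconnect_async(ctrlr);

        /* -EAGAIN means the reconnect is still in progress; any other
         * non-zero return is the "controller reinitialization failed"
         * case above (here, connect() to the target was refused). */
        do {
            rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
        } while (rc == -EAGAIN);

        return rc;
    }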
00:45:08.204 [2024-04-17 08:41:41.284774] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:45:08.204 08:41:41 -- host/timeout.sh@90 -- # sleep 1
00:45:09.138 [2024-04-17 08:41:42.283026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:45:09.138 [2024-04-17 08:41:42.283132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:45:09.138 [2024-04-17 08:41:42.283145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e07170 with addr=10.0.0.2, port=4420
00:45:09.138 [2024-04-17 08:41:42.283157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07170 is same with the state(5) to be set
00:45:09.138 [2024-04-17 08:41:42.283185] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e07170 (9): Bad file descriptor
00:45:09.138 [2024-04-17 08:41:42.283202] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:45:09.138 [2024-04-17 08:41:42.283210] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:45:09.138 [2024-04-17 08:41:42.283219] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:45:09.138 [2024-04-17 08:41:42.283248] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:45:09.138 [2024-04-17 08:41:42.283257] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:45:09.138 08:41:42 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:45:09.396 [2024-04-17 08:41:42.535943] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:45:09.396 08:41:42 -- host/timeout.sh@92 -- # wait 87934
00:45:10.329 [2024-04-17 08:41:43.301810] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:45:16.932
00:45:16.932                                                                               Latency(us)
00:45:16.932 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:45:16.932 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:45:16.932 	 Verification LBA range: start 0x0 length 0x4000
00:45:16.932 	 NVMe0n1             :      10.01    9491.58      37.08       0.00       0.00   13462.99    1531.08 3018433.62
00:45:16.932 ===================================================================================================================
00:45:16.932 Total               :               9491.58      37.08       0.00       0.00   13462.99    1531.08 3018433.62
00:45:16.932 0
00:45:16.932 08:41:50 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:45:16.932 08:41:50 -- host/timeout.sh@97 -- # rpc_pid=88051
00:45:16.932 08:41:50 -- host/timeout.sh@98 -- # sleep 1
00:45:17.190 Running I/O for 10 seconds...
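Two quick checks on the bdevperf summary table above: the MiB/s column is derived from IOPS at the 4096-byte IO size (9491.58 x 4096 / 1048576 ≈ 37.08 MiB/s, matching the NVMe0n1 row), and the latency columns are per-IO microseconds, so the max of 3018433.62 us is roughly 3.0 s — plausibly an I/O that sat queued across the disconnect/reconnect window logged above between 08:41:41 and 08:41:43.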
00:45:18.129 08:41:51 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:45:18.129 [2024-04-17 08:41:51.377054] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf0b0 is same with the state(5) to be set
[log collapsed: the same tcp.c:1574 recv-state message for tqpair=0xfcf0b0 repeated several dozen more times, through 08:41:51.377480]
00:45:18.130 [2024-04-17 08:41:51.377720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:45:18.130 [2024-04-17 08:41:51.377760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log collapsed: several dozen more nvme_io_qpair_print_command / spdk_nvme_print_completion pairs, 08:41:51.377782 through 08:41:51.379190 — READ and WRITE commands on sqid:1 (lba 111736-112872, len:8), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0]
00:45:18.132 [2024-04-17 08:41:51.379201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:45:18.132 [2024-04-17 08:41:51.379209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:18.132 [2024-04-17 08:41:51.379224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:18.132 [2024-04-17 08:41:51.379433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:18.132 [2024-04-17 08:41:51.379495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:18.132 [2024-04-17 08:41:51.379542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:18.132 [2024-04-17 08:41:51.379559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:45:18.132 [2024-04-17 08:41:51.379582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.132 [2024-04-17 08:41:51.379706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.132 [2024-04-17 08:41:51.379715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.133 [2024-04-17 08:41:51.379722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.133 [2024-04-17 08:41:51.379730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:18.133 [2024-04-17 08:41:51.379737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.133 [2024-04-17 
08:41:51.379745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:18.133 [2024-04-17 08:41:51.379752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.133 [2024-04-17 08:41:51.379771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.133 [2024-04-17 08:41:51.379783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.133 [2024-04-17 08:41:51.379791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.133 [2024-04-17 08:41:51.379801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.133 [2024-04-17 08:41:51.379810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:45:18.133 [2024-04-17 08:41:51.379816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.133 [2024-04-17 08:41:51.379828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.133 [2024-04-17 08:41:51.379835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.133 [2024-04-17 08:41:51.379844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.133 [2024-04-17 08:41:51.379850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.133 [2024-04-17 08:41:51.379858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.133 [2024-04-17 08:41:51.379865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.133 [2024-04-17 08:41:51.379877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.133 [2024-04-17 08:41:51.379884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.133 [2024-04-17 08:41:51.379893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.133 [2024-04-17 08:41:51.379899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.133 [2024-04-17 08:41:51.379911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.133 [2024-04-17 08:41:51.379919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.133 [2024-04-17 08:41:51.379927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.133 [2024-04-17 08:41:51.379934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.133 [2024-04-17 08:41:51.379946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:18.133 [2024-04-17 08:41:51.379953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.133 [2024-04-17 08:41:51.379961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e80b90 is same with the state(5) to be set 00:45:18.133 [2024-04-17 08:41:51.379971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:45:18.133 [2024-04-17 08:41:51.379976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:45:18.133 [2024-04-17 08:41:51.379990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112568 len:8 PRP1 0x0 PRP2 0x0 00:45:18.133 [2024-04-17 08:41:51.379997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:18.133 [2024-04-17 08:41:51.380047] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e80b90 was disconnected and freed. reset controller. 00:45:18.133 [2024-04-17 08:41:51.380285] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:45:18.133 [2024-04-17 08:41:51.380362] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e07170 (9): Bad file descriptor 00:45:18.133 [2024-04-17 08:41:51.380466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:45:18.133 [2024-04-17 08:41:51.380498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:45:18.133 [2024-04-17 08:41:51.380508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e07170 with addr=10.0.0.2, port=4420 00:45:18.133 [2024-04-17 08:41:51.380516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07170 is same with the state(5) to be set 00:45:18.133 [2024-04-17 08:41:51.380531] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e07170 (9): Bad file descriptor 00:45:18.133 [2024-04-17 08:41:51.380543] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:45:18.133 [2024-04-17 08:41:51.380554] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:45:18.133 [2024-04-17 08:41:51.380562] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:45:18.133 [2024-04-17 08:41:51.380581] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
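The "connect() failed, errno = 111" entries above are the host side of the exercise: errno 111 on Linux is ECONNREFUSED, which is what you would expect while the target's TCP listener is down during the controller reset. A quick way to confirm the errno name from a shell (illustrative sketch, not part of the harness; assumes python3 is installed):

    # look up errno 111 by symbolic name and message
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused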
00:45:18.133 [2024-04-17 08:41:51.380589] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:45:18.133 08:41:51 -- host/timeout.sh@101 -- # sleep 3
00:45:19.151 [2024-04-17 08:41:52.378833 .. 08:41:52.379074] [... reconnect attempt 1: connect() to 10.0.0.2 port 4420 failed twice with errno = 111 (tqpair=0x1e07170), controller reinitialization failed, Resetting controller failed; resetting controller ...]
00:45:20.085 [2024-04-17 08:41:53.377308 .. 08:41:53.377514] [... reconnect attempt 2: same failure sequence; resetting controller ...]
00:45:21.458 [2024-04-17 08:41:54.377513 .. 08:41:54.380459] [... reconnect attempt 3: same failure sequence; resetting controller ...]
00:45:21.458 08:41:54 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:45:21.458 [2024-04-17 08:41:54.675889] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:45:21.458 08:41:54 -- host/timeout.sh@103 -- # wait 88051
00:45:22.393 [2024-04-17 08:41:55.396113] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
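The sequence above is the reconnect path under test: with the listener removed, every reset attempt fails with ECONNREFUSED, and once the harness re-adds the listener via rpc.py the very next reset completes ("Resetting controller successful."). A minimal standalone sketch of the same listener toggle, assuming a running SPDK target that already exposes subsystem nqn.2016-06.io.spdk:cnode1 and has rpc.py on PATH (hypothetical reproduction, not the harness itself):

    # drop the listener: in-flight I/O is aborted and reconnects start failing
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3   # let a few reconnect attempts fail with errno = 111
    # restore the listener: the next reconnect/reset attempt should succeed
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420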
00:45:27.745
00:45:27.746 Latency(us)
00:45:27.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:45:27.746 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:45:27.746 Verification LBA range: start 0x0 length 0x4000
00:45:27.746 NVMe0n1 : 10.01 7988.62 31.21 6416.76 0.00 8871.47 579.52 3018433.62
00:45:27.746 ===================================================================================================================
00:45:27.746 Total : 7988.62 31.21 6416.76 0.00 8871.47 0.00 3018433.62
00:45:27.746 0
00:45:27.746 08:42:00 -- host/timeout.sh@105 -- # killprocess 87885
00:45:27.746 08:42:00 -- common/autotest_common.sh@926 -- # '[' -z 87885 ']'
00:45:27.746 08:42:00 -- common/autotest_common.sh@930 -- # kill -0 87885
00:45:27.746 08:42:00 -- common/autotest_common.sh@931 -- # uname
00:45:27.746 08:42:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:45:27.746 08:42:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87885
00:45:27.746 08:42:00 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:45:27.746 08:42:00 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:45:27.746 killing process with pid 87885
00:45:27.746 08:42:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87885'
00:45:27.746 08:42:00 -- common/autotest_common.sh@945 -- # kill 87885
00:45:27.746 Received shutdown signal, test time was about 10.000000 seconds
00:45:27.746
00:45:27.746 Latency(us)
00:45:27.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:45:27.746 ===================================================================================================================
00:45:27.746 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:45:27.746 08:42:00 -- common/autotest_common.sh@950 -- # wait 87885
00:45:27.746 08:42:00 -- host/timeout.sh@110 -- # bdevperf_pid=88172
00:45:27.746 08:42:00 -- host/timeout.sh@112 -- # waitforlisten 88172 /var/tmp/bdevperf.sock
00:45:27.746 08:42:00 -- common/autotest_common.sh@819 -- # '[' -z 88172 ']'
00:45:27.746 08:42:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:45:27.746 08:42:00 -- common/autotest_common.sh@824 -- # local max_retries=100
00:45:27.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:45:27.746 08:42:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:45:27.746 08:42:00 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:45:27.746 08:42:00 -- common/autotest_common.sh@828 -- # xtrace_disable
00:45:27.746 08:42:00 -- common/autotest_common.sh@10 -- # set +x
00:45:27.746 [2024-04-17 08:42:00.591625] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
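At this point the harness has relaunched bdevperf with -z (start idle and wait on the RPC socket) against a private socket at /var/tmp/bdevperf.sock; the EAL output below is that process coming up, after which the controller is attached with --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 (per the option names: retry the connection every 2 s and give the controller up after 5 s of loss) and the run is kicked off with bdevperf.py perform_tests. The same flow, condensed into a sketch with the repo paths from the log abbreviated (all commands and flags appear verbatim below):

    # launch bdevperf idle on a private RPC socket, then drive it over that socket
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests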
00:45:27.746 [2024-04-17 08:42:00.591705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88172 ]
00:45:27.746 [2024-04-17 08:42:00.730622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:45:27.746 [2024-04-17 08:42:00.856255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:45:28.312 08:42:01 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:45:28.312 08:42:01 -- common/autotest_common.sh@852 -- # return 0
00:45:28.312 08:42:01 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88172 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:45:28.312 08:42:01 -- host/timeout.sh@116 -- # dtrace_pid=88200
00:45:28.312 08:42:01 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:45:28.569 08:42:01 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:45:28.825 NVMe0n1
00:45:28.825 08:42:02 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:45:28.825 08:42:02 -- host/timeout.sh@124 -- # rpc_pid=88259
00:45:28.825 08:42:02 -- host/timeout.sh@125 -- # sleep 1
00:45:29.082 Running I/O for 10 seconds...
00:45:30.021 08:42:03 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:45:30.021 [2024-04-17 08:42:03.346987 .. 08:42:03.347271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd27e0 is same with the state(5) to be set [... identical message repeated ~40 times; condensed ...]
00:45:30.022 [2024-04-17 08:42:03.347729 .. 08:42:03.348689] nvme_qpair.c: 243/474: *NOTICE*: [... ~60 queued READ commands (sqid:1, randread, assorted LBAs, len:8) printed and completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; repeated notices condensed ...]
00:45:30.294 [2024-04-17 08:42:03.348697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:45:30.294 [2024-04-17 08:42:03.348704]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.294 [2024-04-17 08:42:03.348715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:54792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.294 [2024-04-17 08:42:03.348721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.294 [2024-04-17 08:42:03.348729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:34920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.294 [2024-04-17 08:42:03.348742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.294 [2024-04-17 08:42:03.348752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.294 [2024-04-17 08:42:03.348759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.294 [2024-04-17 08:42:03.348767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.294 [2024-04-17 08:42:03.348775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.294 [2024-04-17 08:42:03.348784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.294 [2024-04-17 08:42:03.348790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.294 [2024-04-17 08:42:03.348801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:37136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.294 [2024-04-17 08:42:03.348808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.294 [2024-04-17 08:42:03.348816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.294 [2024-04-17 08:42:03.348825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.294 [2024-04-17 08:42:03.348833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.294 [2024-04-17 08:42:03.348842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.294 [2024-04-17 08:42:03.348850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.294 [2024-04-17 08:42:03.348857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.294 [2024-04-17 08:42:03.348867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.294 [2024-04-17 08:42:03.348873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.294 [2024-04-17 08:42:03.348883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.294 [2024-04-17 08:42:03.348890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.294 [2024-04-17 08:42:03.348898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.294 [2024-04-17 08:42:03.348906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.294 [2024-04-17 08:42:03.348915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.294 [2024-04-17 08:42:03.348921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.294 [2024-04-17 08:42:03.348931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.294 [2024-04-17 08:42:03.348938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.294 [2024-04-17 08:42:03.348948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.294 [2024-04-17 08:42:03.348955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.294 [2024-04-17 08:42:03.348963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.294 [2024-04-17 08:42:03.348972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.294 [2024-04-17 08:42:03.348980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.294 [2024-04-17 08:42:03.348988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.294 [2024-04-17 08:42:03.348996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.294 [2024-04-17 08:42:03.349005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:122040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349038] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:88104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:127064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:33616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:56120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:93112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:57112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:45:30.295 [2024-04-17 08:42:03.349378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 08:42:03.349531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.295 [2024-04-17 
08:42:03.349548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:27168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.295 [2024-04-17 08:42:03.349555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.296 [2024-04-17 08:42:03.349572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:116304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.296 [2024-04-17 08:42:03.349587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.296 [2024-04-17 08:42:03.349604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:56560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.296 [2024-04-17 08:42:03.349621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:130888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.296 [2024-04-17 08:42:03.349637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.296 [2024-04-17 08:42:03.349651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.296 [2024-04-17 08:42:03.349668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.296 [2024-04-17 08:42:03.349684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.296 [2024-04-17 08:42:03.349700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349708] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:124360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.296 [2024-04-17 08:42:03.349714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.296 [2024-04-17 08:42:03.349731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.296 [2024-04-17 08:42:03.349747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.296 [2024-04-17 08:42:03.349765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.296 [2024-04-17 08:42:03.349781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.296 [2024-04-17 08:42:03.349796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.296 [2024-04-17 08:42:03.349813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.296 [2024-04-17 08:42:03.349829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:56480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.296 [2024-04-17 08:42:03.349844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.296 [2024-04-17 08:42:03.349862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349872] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:45:30.296 [2024-04-17 08:42:03.349879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:45:30.296 [2024-04-17 08:42:03.349913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:45:30.296 [2024-04-17 08:42:03.349921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97888 len:8 PRP1 0x0 PRP2 0x0 00:45:30.296 [2024-04-17 08:42:03.349928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:30.296 [2024-04-17 08:42:03.349981] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2208420 was disconnected and freed. reset controller. 00:45:30.296 [2024-04-17 08:42:03.350265] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:45:30.296 [2024-04-17 08:42:03.350344] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c1170 (9): Bad file descriptor 00:45:30.296 [2024-04-17 08:42:03.350450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:45:30.296 [2024-04-17 08:42:03.350489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:45:30.296 [2024-04-17 08:42:03.350499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21c1170 with addr=10.0.0.2, port=4420 00:45:30.296 [2024-04-17 08:42:03.350509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c1170 is same with the state(5) to be set 00:45:30.296 [2024-04-17 08:42:03.350523] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c1170 (9): Bad file descriptor 00:45:30.296 [2024-04-17 08:42:03.350537] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:45:30.296 [2024-04-17 08:42:03.350544] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:45:30.296 [2024-04-17 08:42:03.350554] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:45:30.296 [2024-04-17 08:42:03.350573] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
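The block above is the host-side teardown sequence in one piece: when the TCP connection drops, every READ still queued on I/O qpair 1 completes with ABORTED - SQ DELETION, where (00/08) decodes to status code type 0h (generic command status) and status code 08h (command aborted due to SQ deletion), and dnr:0 leaves the commands retryable. bdev_nvme then frees the qpair and schedules a controller reset. One way to watch that state machine from a shell while the test runs is to poll the attaching application's controller list over JSON-RPC; a minimal sketch, assuming bdev_nvme_get_controllers is available (true for the SPDK v24.01-pre tree built above) and using a placeholder socket path, since the controller lives in the initiator application (bdevperf in this test), not in the nvmf target:

#!/usr/bin/env bash
# Sketch only: watch initiator-side controller state during the reset loop.
# The socket path is a placeholder, not taken from this log - point it at
# whichever app attached NVMe0, not at the nvmf target's default socket.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock   # hypothetical path

for _ in {1..10}; do
    "$rpc_py" -s "$sock" bdev_nvme_get_controllers || echo "RPC not reachable"
    sleep 1
done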
00:45:30.296 [2024-04-17 08:42:03.350582] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:45:30.296 08:42:03 -- host/timeout.sh@128 -- # wait 88259
00:45:32.278 [2024-04-17 08:42:05.347014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:45:32.278 [2024-04-17 08:42:05.347127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:45:32.278 [2024-04-17 08:42:05.347143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21c1170 with addr=10.0.0.2, port=4420
00:45:32.278 [2024-04-17 08:42:05.347156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c1170 is same with the state(5) to be set
00:45:32.278 [2024-04-17 08:42:05.347181] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c1170 (9): Bad file descriptor
00:45:32.278 [2024-04-17 08:42:05.347207] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:45:32.278 [2024-04-17 08:42:05.347216] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:45:32.278 [2024-04-17 08:42:05.347226] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:45:32.278 [2024-04-17 08:42:05.347285] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:45:32.278 [2024-04-17 08:42:05.347304] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:45:34.194 [2024-04-17 08:42:07.343661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:45:34.194 [2024-04-17 08:42:07.343752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:45:34.194 [2024-04-17 08:42:07.343764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21c1170 with addr=10.0.0.2, port=4420
00:45:34.194 [2024-04-17 08:42:07.343774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c1170 is same with the state(5) to be set
00:45:34.194 [2024-04-17 08:42:07.343794] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c1170 (9): Bad file descriptor
00:45:34.194 [2024-04-17 08:42:07.343811] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:45:34.194 [2024-04-17 08:42:07.343817] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:45:34.194 [2024-04-17 08:42:07.343824] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:45:34.194 [2024-04-17 08:42:07.343890] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:45:34.194 [2024-04-17 08:42:07.343903] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:45:36.096 [2024-04-17 08:42:09.340158] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
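All three retries above fail the same way roughly two seconds apart (08:42:05, 08:42:07, and the final give-up at 08:42:09): posix_sock_create() returns errno = 111, which is ECONNREFUSED, meaning nothing was accepting on 10.0.0.2:4420 at that instant, so reconnect_poll_async fails immediately and the next reset gets rescheduled. The condition is easy to confirm by hand with bash's built-in /dev/tcp redirection; this probe is illustrative only and not part of the test:

#!/usr/bin/env bash
# Probe the NVMe-oF listener the initiator keeps retrying. A refused or
# timed-out connect here corresponds to the errno = 111 lines in the log.
addr=10.0.0.2 port=4420

if timeout 1 bash -c "</dev/tcp/$addr/$port" 2>/dev/null; then
    echo "$addr:$port is accepting connections"
else
    echo "$addr:$port refused or unreachable (ECONNREFUSED, errno 111)"
fi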
00:45:37.032 
00:45:37.032 Latency(us)
00:45:37.032 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:45:37.032 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:45:37.032 NVMe0n1                     :       8.15    2942.03      11.49      15.70      0.00   43337.07    2962.00 7033243.39
00:45:37.032 ===================================================================================================================
00:45:37.032 Total                       :               2942.03      11.49      15.70      0.00   43337.07    2962.00 7033243.39
00:45:37.032 0
00:45:37.290 08:42:10 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:45:37.290 Attaching 5 probes...
00:45:37.290 1345.089942: reset bdev controller NVMe0
00:45:37.290 1345.214176: reconnect bdev controller NVMe0
00:45:37.290 3341.684450: reconnect delay bdev controller NVMe0
00:45:37.290 3341.711343: reconnect bdev controller NVMe0
00:45:37.290 5338.327657: reconnect delay bdev controller NVMe0
00:45:37.290 5338.371070: reconnect bdev controller NVMe0
00:45:37.290 7334.935996: reconnect delay bdev controller NVMe0
00:45:37.290 7334.963238: reconnect bdev controller NVMe0
00:45:37.290 08:42:10 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:45:37.290 08:42:10 -- host/timeout.sh@132 -- # (( 3 <= 2 ))
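The "Attaching 5 probes..." banner is bpftrace's: the test attaches a small bpftrace script to the initiator and the trace records one reset, four reconnects and three "reconnect delay" events, the delays landing about 2000 ms apart (1345 -> 3341 -> 5338 -> 7334). The script only fails when it saw two or fewer delayed reconnects, so the "(( 3 <= 2 ))" arithmetic evaluating false is the passing outcome here. The same acceptance check can be run standalone against a saved trace; the interval computation at the end is an illustrative addition, not part of host/timeout.sh:

#!/usr/bin/env bash
trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

# Same check as host/timeout.sh@132: demand more than two delayed reconnects.
delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
if (( delays <= 2 )); then
    echo "FAIL: only $delays reconnect delays recorded" >&2
    exit 1
fi

# Added for illustration: spacing between delay events, which should track
# the ~2 s reconnect timer visible in the reset/reconnect messages above.
grep 'reconnect delay' "$trace" |
    awk -F: '{ if (prev) printf "%.0f ms since previous delay\n", $1 - prev; prev = $1 }'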
00:45:37.290 08:42:10 -- host/timeout.sh@136 -- # kill 88200
00:45:37.290 08:42:10 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:45:37.290 08:42:10 -- host/timeout.sh@139 -- # killprocess 88172
00:45:37.290 08:42:10 -- common/autotest_common.sh@926 -- # '[' -z 88172 ']'
00:45:37.290 08:42:10 -- common/autotest_common.sh@930 -- # kill -0 88172
00:45:37.290 08:42:10 -- common/autotest_common.sh@931 -- # uname
00:45:37.290 08:42:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:45:37.290 08:42:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88172
00:45:37.290 08:42:10 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:45:37.290 08:42:10 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:45:37.290 killing process with pid 88172
00:45:37.290 08:42:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88172'
00:45:37.290 08:42:10 -- common/autotest_common.sh@945 -- # kill 88172
00:45:37.290 Received shutdown signal, test time was about 8.233465 seconds
00:45:37.290 
00:45:37.290 Latency(us)
00:45:37.290 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:45:37.290 ===================================================================================================================
00:45:37.290 Total                       :       0.00       0.00       0.00       0.00      0.00       0.00       0.00
00:45:37.290 08:42:10 -- common/autotest_common.sh@950 -- # wait 88172
00:45:37.548 08:42:10 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:45:37.548 08:42:10 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT
00:45:37.548 08:42:10 -- host/timeout.sh@145 -- # nvmftestfini
00:45:37.548 08:42:10 -- nvmf/common.sh@476 -- # nvmfcleanup
00:45:37.548 08:42:10 -- nvmf/common.sh@116 -- # sync
00:45:37.548 08:42:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:45:37.548 08:42:10 -- nvmf/common.sh@119 -- # set +e
00:45:37.548 08:42:10 -- nvmf/common.sh@120 -- # for i in {1..20}
00:45:37.548 08:42:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:45:37.806 rmmod nvme_tcp
00:45:37.806 rmmod nvme_fabrics
00:45:37.806 rmmod nvme_keyring
00:45:37.806 08:42:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:45:37.806 08:42:10 -- nvmf/common.sh@123 -- # set -e
00:45:37.806 08:42:10 -- nvmf/common.sh@124 -- # return 0
00:45:37.806 08:42:10 -- nvmf/common.sh@477 -- # '[' -n 87590 ']'
00:45:37.806 08:42:10 -- nvmf/common.sh@478 -- # killprocess 87590
00:45:37.806 08:42:10 -- common/autotest_common.sh@926 -- # '[' -z 87590 ']'
00:45:37.806 08:42:10 -- common/autotest_common.sh@930 -- # kill -0 87590
00:45:37.806 08:42:10 -- common/autotest_common.sh@931 -- # uname
00:45:37.806 08:42:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:45:37.806 08:42:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87590
00:45:37.806 08:42:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:45:37.806 08:42:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:45:37.806 08:42:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87590'
00:45:37.806 killing process with pid 87590
00:45:37.806 08:42:10 -- common/autotest_common.sh@945 -- # kill 87590
00:45:38.064 08:42:11 -- common/autotest_common.sh@950 -- # wait 87590
00:45:38.064 08:42:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:45:38.064 08:42:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:45:38.064 08:42:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:45:38.064 08:42:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:45:38.064 08:42:11 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:45:38.064 08:42:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:45:38.064 08:42:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:45:38.064 08:42:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:45:38.064 08:42:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:45:38.064 
00:45:38.064 real	0m46.792s
00:45:38.064 user	2m17.717s
00:45:38.064 sys	0m4.874s
00:45:38.064 08:42:11 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:45:38.064 08:42:11 -- common/autotest_common.sh@10 -- # set +x
00:45:38.064 ************************************
00:45:38.064 END TEST nvmf_timeout
00:45:38.064 ************************************
00:45:38.064 08:42:11 -- nvmf/nvmf.sh@119 -- # [[ virt == phy ]]
00:45:38.064 08:42:11 -- nvmf/nvmf.sh@126 -- # timing_exit host
00:45:38.064 08:42:11 -- common/autotest_common.sh@718 -- # xtrace_disable
00:45:38.064 08:42:11 -- common/autotest_common.sh@10 -- # set +x
00:45:38.064 08:42:11 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT
00:45:38.064 
00:45:38.064 real	17m56.822s
00:45:38.064 user	57m23.618s
00:45:38.064 sys	3m11.648s
00:45:38.064 ************************************
00:45:38.064 END TEST nvmf_tcp
00:45:38.064 ************************************
00:45:38.064 08:42:11 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]]
00:45:38.064 08:42:11 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:45:38.064 08:42:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:45:38.064 08:42:11 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:45:38.064 08:42:11 -- common/autotest_common.sh@10 -- # set +x
00:45:38.064 ************************************
00:45:38.064 START TEST spdkcli_nvmf_tcp
00:45:38.064 ************************************
00:45:38.064 08:42:11 -- common/autotest_common.sh@1104 -- #
/home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:45:38.064 * Looking for test storage... 00:45:38.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:45:38.064 08:42:11 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:45:38.064 08:42:11 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:45:38.064 08:42:11 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:45:38.064 08:42:11 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:38.064 08:42:11 -- nvmf/common.sh@7 -- # uname -s 00:45:38.064 08:42:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:38.064 08:42:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:38.064 08:42:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:38.064 08:42:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:38.064 08:42:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:38.064 08:42:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:38.065 08:42:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:38.065 08:42:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:38.065 08:42:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:38.065 08:42:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:38.065 08:42:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:45:38.065 08:42:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:45:38.065 08:42:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:38.065 08:42:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:38.065 08:42:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:38.065 08:42:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:38.065 08:42:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:38.065 08:42:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:38.065 08:42:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:38.065 08:42:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:38.065 08:42:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:38.065 08:42:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:38.065 08:42:11 -- paths/export.sh@5 -- # export PATH 00:45:38.065 08:42:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:38.065 08:42:11 -- nvmf/common.sh@46 -- # : 0 00:45:38.065 08:42:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:45:38.065 08:42:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:45:38.065 08:42:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:45:38.065 08:42:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:38.065 08:42:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:38.065 08:42:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:45:38.065 08:42:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:45:38.065 08:42:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:45:38.065 08:42:11 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:45:38.065 08:42:11 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:45:38.065 08:42:11 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:45:38.065 08:42:11 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:45:38.065 08:42:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:45:38.065 08:42:11 -- common/autotest_common.sh@10 -- # set +x 00:45:38.065 08:42:11 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:45:38.065 08:42:11 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=88470 00:45:38.065 08:42:11 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:45:38.065 08:42:11 -- spdkcli/common.sh@34 -- # waitforlisten 88470 00:45:38.065 08:42:11 -- common/autotest_common.sh@819 -- # '[' -z 88470 ']' 00:45:38.065 08:42:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:38.065 08:42:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:45:38.065 08:42:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:38.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:38.065 08:42:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:45:38.065 08:42:11 -- common/autotest_common.sh@10 -- # set +x 00:45:38.323 [2024-04-17 08:42:11.427641] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
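run_nvmf_tgt here boils down to "start the target app and block until its RPC socket answers": spdkcli/common.sh launches nvmf_tgt with -m 0x3 (reactors on cores 0 and 1, matching the two "Reactor started" lines that follow) and waitforlisten polls the UNIX socket while checking that pid 88470 is still alive. A stripped-down equivalent of that startup dance, with paths as in this job; rpc_get_methods is the usual liveness probe in current SPDK (older trees spelled it get_rpc_methods):

#!/usr/bin/env bash
# Start nvmf_tgt on two cores and wait for JSON-RPC to come up - roughly what
# run_nvmf_tgt plus waitforlisten do in the trace above.
spdk=/home/vagrant/spdk_repo/spdk

"$spdk/build/bin/nvmf_tgt" -m 0x3 -p 0 &
tgt_pid=$!

until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    # bail out instead of spinning forever if the target died during init
    kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"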
00:45:38.323 [2024-04-17 08:42:11.427828] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88470 ] 00:45:38.323 [2024-04-17 08:42:11.561883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:38.582 [2024-04-17 08:42:11.671149] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:45:38.582 [2024-04-17 08:42:11.671429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:45:38.582 [2024-04-17 08:42:11.671477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:39.152 08:42:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:45:39.152 08:42:12 -- common/autotest_common.sh@852 -- # return 0 00:45:39.152 08:42:12 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:45:39.152 08:42:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:45:39.152 08:42:12 -- common/autotest_common.sh@10 -- # set +x 00:45:39.152 08:42:12 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:45:39.152 08:42:12 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:45:39.152 08:42:12 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:45:39.152 08:42:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:45:39.152 08:42:12 -- common/autotest_common.sh@10 -- # set +x 00:45:39.152 08:42:12 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:45:39.152 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:45:39.152 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:45:39.152 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:45:39.152 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:45:39.152 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:45:39.152 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:45:39.152 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:45:39.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:45:39.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:45:39.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:45:39.152 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:39.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:45:39.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:45:39.152 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:39.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:45:39.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:45:39.152 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:45:39.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:45:39.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:39.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:45:39.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:45:39.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:45:39.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:45:39.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:39.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:45:39.152 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:45:39.152 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:45:39.152 ' 00:45:39.721 [2024-04-17 08:42:12.748283] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:45:42.256 [2024-04-17 08:42:15.042389] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:43.193 [2024-04-17 08:42:16.381055] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:45:45.730 [2024-04-17 08:42:18.818335] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:45:47.634 [2024-04-17 08:42:20.922298] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:45:49.545 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:45:49.545 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:45:49.545 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:45:49.545 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:45:49.545 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:45:49.545 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:45:49.545 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:45:49.545 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:45:49.545 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:45:49.545 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:45:49.545 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:45:49.545 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:49.545 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:45:49.545 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:45:49.545 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:49.545 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:45:49.545 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:45:49.545 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:45:49.545 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:45:49.545 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:49.545 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:45:49.545 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:45:49.545 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:45:49.545 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:45:49.545 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:49.545 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:45:49.545 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:45:49.545 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:45:49.545 08:42:22 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:45:49.545 08:42:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:45:49.545 08:42:22 -- common/autotest_common.sh@10 -- # set +x 00:45:49.545 08:42:22 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:45:49.545 08:42:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:45:49.545 08:42:22 -- common/autotest_common.sh@10 -- # set +x 00:45:49.545 08:42:22 -- spdkcli/nvmf.sh@69 -- # check_match 00:45:49.545 08:42:22 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:45:49.803 08:42:23 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:45:49.803 08:42:23 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:45:50.063 08:42:23 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:45:50.063 08:42:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:45:50.063 08:42:23 -- common/autotest_common.sh@10 -- # set +x 00:45:50.063 08:42:23 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:45:50.063 08:42:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:45:50.063 08:42:23 -- 
common/autotest_common.sh@10 -- # set +x 00:45:50.063 08:42:23 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:45:50.063 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:45:50.063 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:45:50.063 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:45:50.063 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:45:50.063 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:45:50.063 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:45:50.063 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:45:50.063 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:45:50.063 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:45:50.063 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:45:50.063 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:45:50.063 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:45:50.063 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:45:50.063 ' 00:45:55.329 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:45:55.329 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:45:55.329 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:45:55.329 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:45:55.329 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:45:55.329 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:45:55.329 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:45:55.329 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:45:55.329 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:45:55.329 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:45:55.329 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:45:55.329 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:45:55.329 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:45:55.329 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:45:55.329 08:42:28 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:45:55.329 08:42:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:45:55.329 08:42:28 -- common/autotest_common.sh@10 -- # set +x 00:45:55.588 08:42:28 -- spdkcli/nvmf.sh@90 -- # killprocess 88470 00:45:55.588 08:42:28 -- common/autotest_common.sh@926 -- # '[' -z 88470 ']' 00:45:55.588 08:42:28 -- common/autotest_common.sh@930 -- # kill -0 88470 00:45:55.588 08:42:28 -- common/autotest_common.sh@931 -- # uname 00:45:55.588 08:42:28 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:45:55.588 08:42:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88470 00:45:55.588 08:42:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:45:55.588 08:42:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:45:55.588 08:42:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88470' 00:45:55.588 killing process with pid 88470 00:45:55.588 08:42:28 -- common/autotest_common.sh@945 -- # kill 88470 00:45:55.588 [2024-04-17 08:42:28.736547] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:45:55.588 08:42:28 -- common/autotest_common.sh@950 -- # wait 88470 00:45:55.847 08:42:29 -- spdkcli/nvmf.sh@1 -- # cleanup 00:45:55.847 08:42:29 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:45:55.847 08:42:29 -- spdkcli/common.sh@13 -- # '[' -n 88470 ']' 00:45:55.847 08:42:29 -- spdkcli/common.sh@14 -- # killprocess 88470 00:45:55.847 08:42:29 -- common/autotest_common.sh@926 -- # '[' -z 88470 ']' 00:45:55.847 08:42:29 -- common/autotest_common.sh@930 -- # kill -0 88470 00:45:55.847 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (88470) - No such process 00:45:55.847 08:42:29 -- common/autotest_common.sh@953 -- # echo 'Process with pid 88470 is not found' 00:45:55.847 Process with pid 88470 is not found 00:45:55.847 08:42:29 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:45:55.847 08:42:29 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:45:55.847 08:42:29 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:45:55.847 ************************************ 00:45:55.847 END TEST spdkcli_nvmf_tcp 00:45:55.847 ************************************ 00:45:55.847 00:45:55.847 real 0m17.809s 00:45:55.847 user 0m38.670s 00:45:55.847 sys 0m0.902s 00:45:55.847 08:42:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:55.847 08:42:29 -- common/autotest_common.sh@10 -- # set +x 00:45:56.107 08:42:29 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:45:56.107 08:42:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:45:56.107 08:42:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:45:56.107 08:42:29 -- common/autotest_common.sh@10 -- # set +x 00:45:56.107 ************************************ 00:45:56.107 START TEST nvmf_identify_passthru 00:45:56.107 ************************************ 00:45:56.107 08:42:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:45:56.107 * Looking for test storage... 
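The spdkcli create/check/clear phases above are all driven the same way: test/spdkcli/spdkcli_job.py receives quoted triples of a CLI command, a substring expected in its output, and a True/False flag whose exact handling lives in spdkcli_job.py itself. A minimal hand-run sketch of the same flow, using the spdkcli.py entry point the log already invokes for "ll /nvmf" (paths as in this log; the command set is illustrative, not exhaustive):

/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
# dump the tree, as check_match does before diffing against spdkcli_nvmf.test.match
/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf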
00:45:56.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:45:56.107 08:42:29 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:56.107 08:42:29 -- nvmf/common.sh@7 -- # uname -s 00:45:56.107 08:42:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:56.107 08:42:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:56.107 08:42:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:56.107 08:42:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:56.107 08:42:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:56.107 08:42:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:56.107 08:42:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:56.107 08:42:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:56.107 08:42:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:56.107 08:42:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:56.107 08:42:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:45:56.107 08:42:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:45:56.107 08:42:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:56.107 08:42:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:56.107 08:42:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:56.107 08:42:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:56.107 08:42:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:56.107 08:42:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:56.107 08:42:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:56.107 08:42:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:56.107 08:42:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:56.107 08:42:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:56.107 08:42:29 -- paths/export.sh@5 -- # export PATH 00:45:56.107 08:42:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:56.107 08:42:29 -- nvmf/common.sh@46 -- # : 0 00:45:56.107 08:42:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:45:56.107 08:42:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:45:56.107 08:42:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:45:56.107 08:42:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:56.107 08:42:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:56.107 08:42:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:45:56.107 08:42:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:45:56.107 08:42:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:45:56.107 08:42:29 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:56.107 08:42:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:56.107 08:42:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:56.107 08:42:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:56.107 08:42:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:56.108 08:42:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:56.108 08:42:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:56.108 08:42:29 -- paths/export.sh@5 -- # export PATH 00:45:56.108 08:42:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:56.108 08:42:29 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:45:56.108 08:42:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:45:56.108 08:42:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:56.108 08:42:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:45:56.108 08:42:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:45:56.108 08:42:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:45:56.108 08:42:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:56.108 08:42:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:56.108 08:42:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:56.108 08:42:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:45:56.108 08:42:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:45:56.108 08:42:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:45:56.108 08:42:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:45:56.108 08:42:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:45:56.108 08:42:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:45:56.108 08:42:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:56.108 08:42:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:56.108 08:42:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:45:56.108 08:42:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:45:56.108 08:42:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:56.108 08:42:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:56.108 08:42:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:56.108 08:42:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:56.108 08:42:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:56.108 08:42:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:56.108 08:42:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:56.108 08:42:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:56.108 08:42:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:45:56.108 08:42:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:45:56.108 Cannot find device "nvmf_tgt_br" 00:45:56.108 08:42:29 -- nvmf/common.sh@154 -- # true 00:45:56.108 08:42:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:45:56.367 Cannot find device "nvmf_tgt_br2" 00:45:56.367 08:42:29 -- nvmf/common.sh@155 -- # true 00:45:56.367 08:42:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:45:56.367 08:42:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:45:56.367 Cannot find device "nvmf_tgt_br" 00:45:56.367 08:42:29 -- nvmf/common.sh@157 -- # true 00:45:56.367 08:42:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:45:56.367 Cannot find device "nvmf_tgt_br2" 00:45:56.367 08:42:29 -- nvmf/common.sh@158 -- # true 00:45:56.367 08:42:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:45:56.367 08:42:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:45:56.367 08:42:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:56.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:56.367 08:42:29 -- nvmf/common.sh@161 -- # true 00:45:56.367 08:42:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:56.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:45:56.367 08:42:29 -- nvmf/common.sh@162 -- # true 00:45:56.367 08:42:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:45:56.367 08:42:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:56.367 08:42:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:56.367 08:42:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:56.367 08:42:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:56.367 08:42:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:56.367 08:42:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:56.367 08:42:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:45:56.367 08:42:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:45:56.367 08:42:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:45:56.367 08:42:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:45:56.367 08:42:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:45:56.367 08:42:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:45:56.367 08:42:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:56.367 08:42:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:56.367 08:42:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:56.367 08:42:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:45:56.367 08:42:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:45:56.367 08:42:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:45:56.367 08:42:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:56.367 08:42:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:56.367 08:42:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:56.626 08:42:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:56.626 08:42:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:45:56.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:56.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:45:56.626 00:45:56.626 --- 10.0.0.2 ping statistics --- 00:45:56.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:56.626 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:45:56.626 08:42:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:45:56.626 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:45:56.626 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:45:56.626 00:45:56.626 --- 10.0.0.3 ping statistics --- 00:45:56.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:56.626 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:45:56.626 08:42:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:56.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:56.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:45:56.626 00:45:56.626 --- 10.0.0.1 ping statistics --- 00:45:56.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:56.626 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:45:56.626 08:42:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:56.626 08:42:29 -- nvmf/common.sh@421 -- # return 0 00:45:56.626 08:42:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:45:56.626 08:42:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:56.626 08:42:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:45:56.626 08:42:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:45:56.626 08:42:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:56.626 08:42:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:45:56.626 08:42:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:45:56.626 08:42:29 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:45:56.626 08:42:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:45:56.626 08:42:29 -- common/autotest_common.sh@10 -- # set +x 00:45:56.626 08:42:29 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:45:56.626 08:42:29 -- common/autotest_common.sh@1509 -- # bdfs=() 00:45:56.626 08:42:29 -- common/autotest_common.sh@1509 -- # local bdfs 00:45:56.626 08:42:29 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:45:56.626 08:42:29 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:45:56.626 08:42:29 -- common/autotest_common.sh@1498 -- # bdfs=() 00:45:56.626 08:42:29 -- common/autotest_common.sh@1498 -- # local bdfs 00:45:56.626 08:42:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:45:56.626 08:42:29 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:45:56.626 08:42:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:45:56.626 08:42:29 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:45:56.626 08:42:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:45:56.626 08:42:29 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:45:56.626 08:42:29 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:45:56.626 08:42:29 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:45:56.626 08:42:29 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:45:56.626 08:42:29 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:45:56.626 08:42:29 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:45:56.886 08:42:30 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:45:56.886 08:42:30 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:45:56.886 08:42:30 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:45:56.886 08:42:30 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:45:57.146 08:42:30 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:45:57.146 08:42:30 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:45:57.146 08:42:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:45:57.146 08:42:30 -- common/autotest_common.sh@10 -- # set +x 00:45:57.146 08:42:30 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:45:57.146 08:42:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:45:57.146 08:42:30 -- common/autotest_common.sh@10 -- # set +x 00:45:57.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:57.146 08:42:30 -- target/identify_passthru.sh@31 -- # nvmfpid=88962 00:45:57.146 08:42:30 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:45:57.146 08:42:30 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:45:57.146 08:42:30 -- target/identify_passthru.sh@35 -- # waitforlisten 88962 00:45:57.146 08:42:30 -- common/autotest_common.sh@819 -- # '[' -z 88962 ']' 00:45:57.146 08:42:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:57.146 08:42:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:45:57.146 08:42:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:57.146 08:42:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:45:57.146 08:42:30 -- common/autotest_common.sh@10 -- # set +x 00:45:57.146 [2024-04-17 08:42:30.378729] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:45:57.146 [2024-04-17 08:42:30.378840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:57.406 [2024-04-17 08:42:30.537670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:57.406 [2024-04-17 08:42:30.692923] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:45:57.406 [2024-04-17 08:42:30.693191] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:57.406 [2024-04-17 08:42:30.693216] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:57.406 [2024-04-17 08:42:30.693261] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
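The target bring-up traced here starts nvmf_tgt inside the test network namespace with --wait-for-rpc, so the app pauses after creating its RPC socket and waits for explicit initialization. A hedged sketch of the same sequence (flags copied from the command line echoed above; the polling loop stands in for the test's waitforlisten helper):

# start the target in the namespace; -m 0xF pins four reactor cores
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
# block until /var/tmp/spdk.sock accepts RPCs, then finish initialization
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
# the passthru identify hook must be configured before framework_start_init
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init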
00:45:57.406 [2024-04-17 08:42:30.693526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:45:57.406 [2024-04-17 08:42:30.693543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:45:57.406 [2024-04-17 08:42:30.693643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:57.406 [2024-04-17 08:42:30.693643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:45:57.972 08:42:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:45:57.972 08:42:31 -- common/autotest_common.sh@852 -- # return 0 00:45:57.972 08:42:31 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:45:57.972 08:42:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:57.972 08:42:31 -- common/autotest_common.sh@10 -- # set +x 00:45:57.972 08:42:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:57.972 08:42:31 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:45:57.972 08:42:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:57.972 08:42:31 -- common/autotest_common.sh@10 -- # set +x 00:45:58.231 [2024-04-17 08:42:31.439194] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:45:58.231 08:42:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:58.231 08:42:31 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:58.231 08:42:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:58.231 08:42:31 -- common/autotest_common.sh@10 -- # set +x 00:45:58.231 [2024-04-17 08:42:31.453427] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:58.231 08:42:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:58.231 08:42:31 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:45:58.231 08:42:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:45:58.231 08:42:31 -- common/autotest_common.sh@10 -- # set +x 00:45:58.231 08:42:31 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:45:58.231 08:42:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:58.231 08:42:31 -- common/autotest_common.sh@10 -- # set +x 00:45:58.520 Nvme0n1 00:45:58.520 08:42:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:58.520 08:42:31 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:45:58.520 08:42:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:58.520 08:42:31 -- common/autotest_common.sh@10 -- # set +x 00:45:58.520 08:42:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:58.520 08:42:31 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:45:58.520 08:42:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:58.520 08:42:31 -- common/autotest_common.sh@10 -- # set +x 00:45:58.520 08:42:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:58.520 08:42:31 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:58.520 08:42:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:58.520 08:42:31 -- common/autotest_common.sh@10 -- # set +x 00:45:58.520 [2024-04-17 08:42:31.621904] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:58.520 08:42:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:45:58.520 08:42:31 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:45:58.520 08:42:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:58.520 08:42:31 -- common/autotest_common.sh@10 -- # set +x 00:45:58.520 [2024-04-17 08:42:31.633629] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:45:58.520 [ 00:45:58.520 { 00:45:58.520 "allow_any_host": true, 00:45:58.520 "hosts": [], 00:45:58.520 "listen_addresses": [], 00:45:58.520 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:45:58.520 "subtype": "Discovery" 00:45:58.520 }, 00:45:58.520 { 00:45:58.520 "allow_any_host": true, 00:45:58.520 "hosts": [], 00:45:58.520 "listen_addresses": [ 00:45:58.520 { 00:45:58.520 "adrfam": "IPv4", 00:45:58.520 "traddr": "10.0.0.2", 00:45:58.520 "transport": "TCP", 00:45:58.520 "trsvcid": "4420", 00:45:58.520 "trtype": "TCP" 00:45:58.520 } 00:45:58.520 ], 00:45:58.520 "max_cntlid": 65519, 00:45:58.520 "max_namespaces": 1, 00:45:58.520 "min_cntlid": 1, 00:45:58.520 "model_number": "SPDK bdev Controller", 00:45:58.520 "namespaces": [ 00:45:58.520 { 00:45:58.520 "bdev_name": "Nvme0n1", 00:45:58.520 "name": "Nvme0n1", 00:45:58.520 "nguid": "ACB181B3263E4F718F98C60D5567686B", 00:45:58.520 "nsid": 1, 00:45:58.520 "uuid": "acb181b3-263e-4f71-8f98-c60d5567686b" 00:45:58.520 } 00:45:58.520 ], 00:45:58.520 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:45:58.520 "serial_number": "SPDK00000000000001", 00:45:58.520 "subtype": "NVMe" 00:45:58.520 } 00:45:58.520 ] 00:45:58.520 08:42:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:58.520 08:42:31 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:45:58.520 08:42:31 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:45:58.520 08:42:31 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:45:58.779 08:42:31 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:45:58.779 08:42:31 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:45:58.779 08:42:31 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:45:58.779 08:42:31 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:45:58.779 08:42:32 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:45:58.779 08:42:32 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:45:58.779 08:42:32 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:45:58.779 08:42:32 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:58.779 08:42:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:45:58.779 08:42:32 -- common/autotest_common.sh@10 -- # set +x 00:45:58.779 08:42:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:45:58.779 08:42:32 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:45:58.779 08:42:32 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:45:58.779 08:42:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:45:58.779 08:42:32 -- nvmf/common.sh@116 -- # sync 00:45:59.038 08:42:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:45:59.038 08:42:32 -- nvmf/common.sh@119 -- # set +e 00:45:59.038 08:42:32 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:45:59.038 08:42:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:45:59.038 rmmod nvme_tcp 00:45:59.038 rmmod nvme_fabrics 00:45:59.038 rmmod nvme_keyring 00:45:59.038 08:42:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:45:59.038 08:42:32 -- nvmf/common.sh@123 -- # set -e 00:45:59.038 08:42:32 -- nvmf/common.sh@124 -- # return 0 00:45:59.038 08:42:32 -- nvmf/common.sh@477 -- # '[' -n 88962 ']' 00:45:59.038 08:42:32 -- nvmf/common.sh@478 -- # killprocess 88962 00:45:59.038 08:42:32 -- common/autotest_common.sh@926 -- # '[' -z 88962 ']' 00:45:59.038 08:42:32 -- common/autotest_common.sh@930 -- # kill -0 88962 00:45:59.038 08:42:32 -- common/autotest_common.sh@931 -- # uname 00:45:59.038 08:42:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:45:59.038 08:42:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88962 00:45:59.298 08:42:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:45:59.299 08:42:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:45:59.299 killing process with pid 88962 00:45:59.299 08:42:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88962' 00:45:59.299 08:42:32 -- common/autotest_common.sh@945 -- # kill 88962 00:45:59.299 [2024-04-17 08:42:32.389738] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:45:59.299 08:42:32 -- common/autotest_common.sh@950 -- # wait 88962 00:45:59.299 08:42:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:45:59.299 08:42:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:45:59.299 08:42:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:45:59.299 08:42:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:59.299 08:42:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:45:59.299 08:42:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:59.299 08:42:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:59.299 08:42:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:59.559 08:42:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:45:59.559 00:45:59.559 real 0m3.484s 00:45:59.559 user 0m8.109s 00:45:59.559 sys 0m1.056s 00:45:59.559 08:42:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:59.559 08:42:32 -- common/autotest_common.sh@10 -- # set +x 00:45:59.559 ************************************ 00:45:59.559 END TEST nvmf_identify_passthru 00:45:59.559 ************************************ 00:45:59.559 08:42:32 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:45:59.559 08:42:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:45:59.559 08:42:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:45:59.559 08:42:32 -- common/autotest_common.sh@10 -- # set +x 00:45:59.559 ************************************ 00:45:59.559 START TEST nvmf_dif 00:45:59.559 ************************************ 00:45:59.559 08:42:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:45:59.559 * Looking for test storage... 
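The identify_passthru suite that just finished is, at its core, a string comparison: identify data read straight from the PCIe controller must match what the passthrough subsystem reports over TCP. A condensed sketch using the same spdk_nvme_identify invocations echoed in the trace:

local_sn=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 | grep 'Serial Number:' | awk '{print $3}')
remote_sn=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:' | awk '{print $3}')
# the test fails here if passthrough does not surface the backing controller's identity
[ "$local_sn" = "$remote_sn" ] || exit 1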
00:45:59.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:45:59.559 08:42:32 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:59.559 08:42:32 -- nvmf/common.sh@7 -- # uname -s 00:45:59.559 08:42:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:59.559 08:42:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:59.559 08:42:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:59.559 08:42:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:59.559 08:42:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:59.559 08:42:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:59.559 08:42:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:59.559 08:42:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:59.559 08:42:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:59.559 08:42:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:59.559 08:42:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:45:59.559 08:42:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:45:59.559 08:42:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:59.559 08:42:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:59.559 08:42:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:59.559 08:42:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:59.559 08:42:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:59.559 08:42:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:59.559 08:42:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:59.559 08:42:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:59.559 08:42:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:59.559 08:42:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:59.559 08:42:32 -- paths/export.sh@5 -- # export PATH 00:45:59.818 08:42:32 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:59.818 08:42:32 -- nvmf/common.sh@46 -- # : 0 00:45:59.818 08:42:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:45:59.818 08:42:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:45:59.818 08:42:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:45:59.818 08:42:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:59.818 08:42:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:59.818 08:42:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:45:59.818 08:42:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:45:59.818 08:42:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:45:59.818 08:42:32 -- target/dif.sh@15 -- # NULL_META=16 00:45:59.818 08:42:32 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:45:59.818 08:42:32 -- target/dif.sh@15 -- # NULL_SIZE=64 00:45:59.818 08:42:32 -- target/dif.sh@15 -- # NULL_DIF=1 00:45:59.818 08:42:32 -- target/dif.sh@135 -- # nvmftestinit 00:45:59.818 08:42:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:45:59.818 08:42:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:59.818 08:42:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:45:59.818 08:42:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:45:59.818 08:42:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:45:59.818 08:42:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:59.818 08:42:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:59.818 08:42:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:59.818 08:42:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:45:59.818 08:42:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:45:59.818 08:42:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:45:59.818 08:42:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:45:59.818 08:42:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:45:59.818 08:42:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:45:59.818 08:42:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:59.818 08:42:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:59.818 08:42:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:45:59.818 08:42:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:45:59.818 08:42:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:59.818 08:42:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:59.818 08:42:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:59.818 08:42:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:59.818 08:42:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:59.818 08:42:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:59.818 08:42:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:59.818 08:42:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:59.818 08:42:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:45:59.818 08:42:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:45:59.818 Cannot find device "nvmf_tgt_br" 
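The "Cannot find device" and "Cannot open network namespace" errors in this stretch (and the matching ones during the identify_passthru run) are expected: nvmf_veth_init starts by deleting any leftover interfaces and namespace before rebuilding them, so on a freshly cleaned host every delete fails. The trace shows each failed command immediately followed by "# true", i.e. the teardown is best-effort, roughly:

# best-effort cleanup; failures are normal when nothing is left over
ip link set nvmf_tgt_br nomaster || true
ip link delete nvmf_br type bridge || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true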
00:45:59.818 08:42:32 -- nvmf/common.sh@154 -- # true 00:45:59.818 08:42:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:45:59.818 Cannot find device "nvmf_tgt_br2" 00:45:59.818 08:42:32 -- nvmf/common.sh@155 -- # true 00:45:59.818 08:42:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:45:59.818 08:42:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:45:59.818 Cannot find device "nvmf_tgt_br" 00:45:59.818 08:42:32 -- nvmf/common.sh@157 -- # true 00:45:59.818 08:42:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:45:59.818 Cannot find device "nvmf_tgt_br2" 00:45:59.818 08:42:33 -- nvmf/common.sh@158 -- # true 00:45:59.818 08:42:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:45:59.818 08:42:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:45:59.818 08:42:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:59.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:59.818 08:42:33 -- nvmf/common.sh@161 -- # true 00:45:59.818 08:42:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:59.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:59.818 08:42:33 -- nvmf/common.sh@162 -- # true 00:45:59.818 08:42:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:45:59.818 08:42:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:59.818 08:42:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:59.818 08:42:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:59.818 08:42:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:59.818 08:42:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:59.818 08:42:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:59.818 08:42:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:46:00.077 08:42:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:46:00.077 08:42:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:46:00.077 08:42:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:46:00.077 08:42:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:46:00.077 08:42:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:46:00.077 08:42:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:46:00.077 08:42:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:46:00.077 08:42:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:46:00.077 08:42:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:46:00.077 08:42:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:46:00.077 08:42:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:46:00.077 08:42:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:46:00.077 08:42:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:46:00.077 08:42:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:46:00.077 08:42:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:46:00.077 08:42:33 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:46:00.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:00.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:46:00.077 00:46:00.077 --- 10.0.0.2 ping statistics --- 00:46:00.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:00.077 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:46:00.077 08:42:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:46:00.077 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:46:00.077 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.136 ms 00:46:00.077 00:46:00.077 --- 10.0.0.3 ping statistics --- 00:46:00.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:00.077 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:46:00.077 08:42:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:46:00.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:00.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:46:00.077 00:46:00.077 --- 10.0.0.1 ping statistics --- 00:46:00.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:00.077 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:46:00.077 08:42:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:00.077 08:42:33 -- nvmf/common.sh@421 -- # return 0 00:46:00.077 08:42:33 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:46:00.077 08:42:33 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:46:00.645 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:00.645 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:46:00.645 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:46:00.645 08:42:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:00.645 08:42:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:46:00.645 08:42:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:46:00.645 08:42:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:00.645 08:42:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:46:00.645 08:42:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:46:00.645 08:42:33 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:46:00.645 08:42:33 -- target/dif.sh@137 -- # nvmfappstart 00:46:00.645 08:42:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:46:00.645 08:42:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:46:00.645 08:42:33 -- common/autotest_common.sh@10 -- # set +x 00:46:00.645 08:42:33 -- nvmf/common.sh@469 -- # nvmfpid=89316 00:46:00.645 08:42:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:46:00.645 08:42:33 -- nvmf/common.sh@470 -- # waitforlisten 89316 00:46:00.645 08:42:33 -- common/autotest_common.sh@819 -- # '[' -z 89316 ']' 00:46:00.645 08:42:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:00.645 08:42:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:46:00.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:00.645 08:42:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
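Both suites in this log rebuild the same virtual topology before starting the target: one initiator veth pair kept in the default namespace, two target veth pairs whose far ends are moved into nvmf_tgt_ns_spdk, and the nvmf_br bridge tying the near ends together. A condensed sketch with names and addresses exactly as traced above (the second target pair, nvmf_tgt_if2 at 10.0.0.3, follows the same pattern):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # initiator-to-target reachability, as verified above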
00:46:00.645 08:42:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:46:00.645 08:42:33 -- common/autotest_common.sh@10 -- # set +x 00:46:00.645 [2024-04-17 08:42:33.936735] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:46:00.645 [2024-04-17 08:42:33.936819] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:00.904 [2024-04-17 08:42:34.077888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:00.904 [2024-04-17 08:42:34.188176] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:46:00.904 [2024-04-17 08:42:34.188313] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:00.904 [2024-04-17 08:42:34.188321] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:00.904 [2024-04-17 08:42:34.188327] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:00.904 [2024-04-17 08:42:34.188351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:01.850 08:42:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:46:01.850 08:42:34 -- common/autotest_common.sh@852 -- # return 0 00:46:01.850 08:42:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:46:01.850 08:42:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:46:01.850 08:42:34 -- common/autotest_common.sh@10 -- # set +x 00:46:01.850 08:42:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:01.850 08:42:34 -- target/dif.sh@139 -- # create_transport 00:46:01.850 08:42:34 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:46:01.850 08:42:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:01.850 08:42:34 -- common/autotest_common.sh@10 -- # set +x 00:46:01.850 [2024-04-17 08:42:34.872899] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:01.850 08:42:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:01.850 08:42:34 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:46:01.850 08:42:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:46:01.850 08:42:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:46:01.850 08:42:34 -- common/autotest_common.sh@10 -- # set +x 00:46:01.850 ************************************ 00:46:01.850 START TEST fio_dif_1_default 00:46:01.850 ************************************ 00:46:01.850 08:42:34 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:46:01.850 08:42:34 -- target/dif.sh@86 -- # create_subsystems 0 00:46:01.850 08:42:34 -- target/dif.sh@28 -- # local sub 00:46:01.850 08:42:34 -- target/dif.sh@30 -- # for sub in "$@" 00:46:01.850 08:42:34 -- target/dif.sh@31 -- # create_subsystem 0 00:46:01.850 08:42:34 -- target/dif.sh@18 -- # local sub_id=0 00:46:01.850 08:42:34 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:46:01.850 08:42:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:01.850 08:42:34 -- common/autotest_common.sh@10 -- # set +x 00:46:01.850 bdev_null0 00:46:01.850 08:42:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:01.850 08:42:34 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:01.850 08:42:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:01.850 08:42:34 -- common/autotest_common.sh@10 -- # set +x 00:46:01.850 08:42:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:01.850 08:42:34 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:01.850 08:42:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:01.850 08:42:34 -- common/autotest_common.sh@10 -- # set +x 00:46:01.850 08:42:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:01.850 08:42:34 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:01.850 08:42:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:01.850 08:42:34 -- common/autotest_common.sh@10 -- # set +x 00:46:01.850 [2024-04-17 08:42:34.932870] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:01.850 08:42:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:01.850 08:42:34 -- target/dif.sh@87 -- # fio /dev/fd/62 00:46:01.850 08:42:34 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:46:01.850 08:42:34 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:46:01.850 08:42:34 -- nvmf/common.sh@520 -- # config=() 00:46:01.850 08:42:34 -- nvmf/common.sh@520 -- # local subsystem config 00:46:01.850 08:42:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:46:01.850 08:42:34 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:01.850 08:42:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:46:01.850 { 00:46:01.850 "params": { 00:46:01.850 "name": "Nvme$subsystem", 00:46:01.850 "trtype": "$TEST_TRANSPORT", 00:46:01.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:01.850 "adrfam": "ipv4", 00:46:01.850 "trsvcid": "$NVMF_PORT", 00:46:01.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:01.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:01.850 "hdgst": ${hdgst:-false}, 00:46:01.850 "ddgst": ${ddgst:-false} 00:46:01.850 }, 00:46:01.850 "method": "bdev_nvme_attach_controller" 00:46:01.850 } 00:46:01.850 EOF 00:46:01.850 )") 00:46:01.850 08:42:34 -- target/dif.sh@82 -- # gen_fio_conf 00:46:01.850 08:42:34 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:01.850 08:42:34 -- target/dif.sh@54 -- # local file 00:46:01.850 08:42:34 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:46:01.850 08:42:34 -- target/dif.sh@56 -- # cat 00:46:01.850 08:42:34 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:01.850 08:42:34 -- common/autotest_common.sh@1318 -- # local sanitizers 00:46:01.850 08:42:34 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:01.850 08:42:34 -- common/autotest_common.sh@1320 -- # shift 00:46:01.850 08:42:34 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:46:01.850 08:42:34 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:46:01.850 08:42:34 -- nvmf/common.sh@542 -- # cat 00:46:01.850 08:42:34 -- target/dif.sh@72 -- # (( file = 1 )) 00:46:01.850 08:42:34 -- target/dif.sh@72 -- # (( file <= files )) 00:46:01.850 08:42:34 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:01.850 
08:42:34 -- common/autotest_common.sh@1324 -- # grep libasan 00:46:01.851 08:42:34 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:46:01.851 08:42:34 -- nvmf/common.sh@544 -- # jq . 00:46:01.851 08:42:34 -- nvmf/common.sh@545 -- # IFS=, 00:46:01.851 08:42:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:46:01.851 "params": { 00:46:01.851 "name": "Nvme0", 00:46:01.851 "trtype": "tcp", 00:46:01.851 "traddr": "10.0.0.2", 00:46:01.851 "adrfam": "ipv4", 00:46:01.851 "trsvcid": "4420", 00:46:01.851 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:01.851 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:01.851 "hdgst": false, 00:46:01.851 "ddgst": false 00:46:01.851 }, 00:46:01.851 "method": "bdev_nvme_attach_controller" 00:46:01.851 }' 00:46:01.851 08:42:34 -- common/autotest_common.sh@1324 -- # asan_lib= 00:46:01.851 08:42:34 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:46:01.851 08:42:34 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:46:01.851 08:42:34 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:01.851 08:42:34 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:46:01.851 08:42:34 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:46:01.851 08:42:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:46:01.851 08:42:35 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:46:01.851 08:42:35 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:46:01.851 08:42:35 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:01.851 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:46:01.851 fio-3.35 00:46:01.851 Starting 1 thread 00:46:02.418 [2024-04-17 08:42:35.636635] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:46:02.418 [2024-04-17 08:42:35.636768] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:46:14.662 00:46:14.662 filename0: (groupid=0, jobs=1): err= 0: pid=89406: Wed Apr 17 08:42:45 2024 00:46:14.662 read: IOPS=1034, BW=4138KiB/s (4237kB/s)(40.5MiB/10035msec) 00:46:14.662 slat (nsec): min=6809, max=42105, avg=8127.23, stdev=3114.61 00:46:14.662 clat (usec): min=358, max=42510, avg=3843.55, stdev=11265.84 00:46:14.662 lat (usec): min=365, max=42518, avg=3851.68, stdev=11265.78 00:46:14.662 clat percentiles (usec): 00:46:14.662 | 1.00th=[ 388], 5.00th=[ 392], 10.00th=[ 396], 20.00th=[ 404], 00:46:14.662 | 30.00th=[ 408], 40.00th=[ 412], 50.00th=[ 416], 60.00th=[ 424], 00:46:14.662 | 70.00th=[ 429], 80.00th=[ 449], 90.00th=[ 570], 95.00th=[40633], 00:46:14.662 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[42206], 00:46:14.662 | 99.99th=[42730] 00:46:14.662 bw ( KiB/s): min= 2880, max= 5536, per=100.00%, avg=4150.40, stdev=759.68, samples=20 00:46:14.662 iops : min= 720, max= 1384, avg=1037.60, stdev=189.92, samples=20 00:46:14.662 lat (usec) : 500=88.71%, 750=2.80%, 1000=0.01% 00:46:14.662 lat (msec) : 4=0.04%, 50=8.44% 00:46:14.662 cpu : usr=94.22%, sys=5.00%, ctx=258, majf=0, minf=0 00:46:14.662 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:14.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:14.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:14.662 issued rwts: total=10380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:14.662 latency : target=0, window=0, percentile=100.00%, depth=4 00:46:14.662 00:46:14.662 Run status group 0 (all jobs): 00:46:14.662 READ: bw=4138KiB/s (4237kB/s), 4138KiB/s-4138KiB/s (4237kB/s-4237kB/s), io=40.5MiB (42.5MB), run=10035-10035msec 00:46:14.662 08:42:46 -- target/dif.sh@88 -- # destroy_subsystems 0 00:46:14.662 08:42:46 -- target/dif.sh@43 -- # local sub 00:46:14.662 08:42:46 -- target/dif.sh@45 -- # for sub in "$@" 00:46:14.662 08:42:46 -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:14.662 08:42:46 -- target/dif.sh@36 -- # local sub_id=0 00:46:14.662 08:42:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:14.662 08:42:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:14.662 08:42:46 -- common/autotest_common.sh@10 -- # set +x 00:46:14.662 08:42:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:14.662 08:42:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:14.662 08:42:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:14.662 08:42:46 -- common/autotest_common.sh@10 -- # set +x 00:46:14.662 08:42:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:14.662 00:46:14.662 real 0m11.230s 00:46:14.662 user 0m10.247s 00:46:14.662 sys 0m0.850s 00:46:14.662 08:42:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:14.662 08:42:46 -- common/autotest_common.sh@10 -- # set +x 00:46:14.662 ************************************ 00:46:14.663 END TEST fio_dif_1_default 00:46:14.663 ************************************ 00:46:14.663 08:42:46 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:46:14.663 08:42:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:46:14.663 08:42:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:46:14.663 08:42:46 -- common/autotest_common.sh@10 -- # set +x 00:46:14.663 ************************************ 00:46:14.663 
START TEST fio_dif_1_multi_subsystems 00:46:14.663 ************************************ 00:46:14.663 08:42:46 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:46:14.663 08:42:46 -- target/dif.sh@92 -- # local files=1 00:46:14.663 08:42:46 -- target/dif.sh@94 -- # create_subsystems 0 1 00:46:14.663 08:42:46 -- target/dif.sh@28 -- # local sub 00:46:14.663 08:42:46 -- target/dif.sh@30 -- # for sub in "$@" 00:46:14.663 08:42:46 -- target/dif.sh@31 -- # create_subsystem 0 00:46:14.663 08:42:46 -- target/dif.sh@18 -- # local sub_id=0 00:46:14.663 08:42:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:46:14.663 08:42:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:14.663 08:42:46 -- common/autotest_common.sh@10 -- # set +x 00:46:14.663 bdev_null0 00:46:14.663 08:42:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:14.663 08:42:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:14.663 08:42:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:14.663 08:42:46 -- common/autotest_common.sh@10 -- # set +x 00:46:14.663 08:42:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:14.663 08:42:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:14.663 08:42:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:14.663 08:42:46 -- common/autotest_common.sh@10 -- # set +x 00:46:14.663 08:42:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:14.663 08:42:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:14.663 08:42:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:14.663 08:42:46 -- common/autotest_common.sh@10 -- # set +x 00:46:14.663 [2024-04-17 08:42:46.211456] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:14.663 08:42:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:14.663 08:42:46 -- target/dif.sh@30 -- # for sub in "$@" 00:46:14.663 08:42:46 -- target/dif.sh@31 -- # create_subsystem 1 00:46:14.663 08:42:46 -- target/dif.sh@18 -- # local sub_id=1 00:46:14.663 08:42:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:46:14.663 08:42:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:14.663 08:42:46 -- common/autotest_common.sh@10 -- # set +x 00:46:14.663 bdev_null1 00:46:14.663 08:42:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:14.663 08:42:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:46:14.663 08:42:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:14.663 08:42:46 -- common/autotest_common.sh@10 -- # set +x 00:46:14.663 08:42:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:14.663 08:42:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:46:14.663 08:42:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:14.663 08:42:46 -- common/autotest_common.sh@10 -- # set +x 00:46:14.663 08:42:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:14.663 08:42:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:14.663 08:42:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:14.663 
08:42:46 -- common/autotest_common.sh@10 -- # set +x 00:46:14.663 08:42:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:14.663 08:42:46 -- target/dif.sh@95 -- # fio /dev/fd/62 00:46:14.663 08:42:46 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:14.663 08:42:46 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:46:14.663 08:42:46 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:14.663 08:42:46 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:46:14.663 08:42:46 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:46:14.663 08:42:46 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:14.663 08:42:46 -- common/autotest_common.sh@1318 -- # local sanitizers 00:46:14.663 08:42:46 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:14.663 08:42:46 -- nvmf/common.sh@520 -- # config=() 00:46:14.663 08:42:46 -- target/dif.sh@82 -- # gen_fio_conf 00:46:14.663 08:42:46 -- common/autotest_common.sh@1320 -- # shift 00:46:14.663 08:42:46 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:46:14.663 08:42:46 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:46:14.663 08:42:46 -- nvmf/common.sh@520 -- # local subsystem config 00:46:14.663 08:42:46 -- target/dif.sh@54 -- # local file 00:46:14.663 08:42:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:46:14.663 08:42:46 -- target/dif.sh@56 -- # cat 00:46:14.663 08:42:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:46:14.663 { 00:46:14.663 "params": { 00:46:14.663 "name": "Nvme$subsystem", 00:46:14.663 "trtype": "$TEST_TRANSPORT", 00:46:14.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:14.663 "adrfam": "ipv4", 00:46:14.663 "trsvcid": "$NVMF_PORT", 00:46:14.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:14.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:14.663 "hdgst": ${hdgst:-false}, 00:46:14.663 "ddgst": ${ddgst:-false} 00:46:14.663 }, 00:46:14.663 "method": "bdev_nvme_attach_controller" 00:46:14.663 } 00:46:14.663 EOF 00:46:14.663 )") 00:46:14.663 08:42:46 -- target/dif.sh@72 -- # (( file = 1 )) 00:46:14.663 08:42:46 -- nvmf/common.sh@542 -- # cat 00:46:14.663 08:42:46 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:14.663 08:42:46 -- target/dif.sh@72 -- # (( file <= files )) 00:46:14.663 08:42:46 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:46:14.663 08:42:46 -- target/dif.sh@73 -- # cat 00:46:14.663 08:42:46 -- common/autotest_common.sh@1324 -- # grep libasan 00:46:14.663 08:42:46 -- target/dif.sh@72 -- # (( file++ )) 00:46:14.663 08:42:46 -- target/dif.sh@72 -- # (( file <= files )) 00:46:14.663 08:42:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:46:14.663 08:42:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:46:14.663 { 00:46:14.663 "params": { 00:46:14.663 "name": "Nvme$subsystem", 00:46:14.663 "trtype": "$TEST_TRANSPORT", 00:46:14.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:14.663 "adrfam": "ipv4", 00:46:14.663 "trsvcid": "$NVMF_PORT", 00:46:14.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:14.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:14.663 "hdgst": ${hdgst:-false}, 00:46:14.663 "ddgst": ${ddgst:-false} 00:46:14.663 }, 00:46:14.663 "method": "bdev_nvme_attach_controller" 
00:46:14.663 } 00:46:14.663 EOF 00:46:14.663 )") 00:46:14.663 08:42:46 -- nvmf/common.sh@542 -- # cat 00:46:14.663 08:42:46 -- nvmf/common.sh@544 -- # jq . 00:46:14.663 08:42:46 -- nvmf/common.sh@545 -- # IFS=, 00:46:14.663 08:42:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:46:14.663 "params": { 00:46:14.663 "name": "Nvme0", 00:46:14.663 "trtype": "tcp", 00:46:14.663 "traddr": "10.0.0.2", 00:46:14.663 "adrfam": "ipv4", 00:46:14.663 "trsvcid": "4420", 00:46:14.663 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:14.663 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:14.663 "hdgst": false, 00:46:14.663 "ddgst": false 00:46:14.663 }, 00:46:14.663 "method": "bdev_nvme_attach_controller" 00:46:14.663 },{ 00:46:14.663 "params": { 00:46:14.663 "name": "Nvme1", 00:46:14.663 "trtype": "tcp", 00:46:14.663 "traddr": "10.0.0.2", 00:46:14.663 "adrfam": "ipv4", 00:46:14.663 "trsvcid": "4420", 00:46:14.663 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:14.663 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:14.663 "hdgst": false, 00:46:14.663 "ddgst": false 00:46:14.663 }, 00:46:14.663 "method": "bdev_nvme_attach_controller" 00:46:14.663 }' 00:46:14.663 08:42:46 -- common/autotest_common.sh@1324 -- # asan_lib= 00:46:14.663 08:42:46 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:46:14.663 08:42:46 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:46:14.663 08:42:46 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:14.663 08:42:46 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:46:14.663 08:42:46 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:46:14.663 08:42:46 -- common/autotest_common.sh@1324 -- # asan_lib= 00:46:14.663 08:42:46 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:46:14.663 08:42:46 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:46:14.663 08:42:46 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:14.663 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:46:14.663 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:46:14.663 fio-3.35 00:46:14.663 Starting 2 threads 00:46:14.663 [2024-04-17 08:42:47.080161] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:46:14.663 [2024-04-17 08:42:47.080242] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:46:24.709 00:46:24.709 filename0: (groupid=0, jobs=1): err= 0: pid=89565: Wed Apr 17 08:42:57 2024 00:46:24.709 read: IOPS=259, BW=1039KiB/s (1064kB/s)(10.2MiB/10038msec) 00:46:24.709 slat (nsec): min=5735, max=53012, avg=10465.57, stdev=7408.57 00:46:24.709 clat (usec): min=328, max=41995, avg=15363.19, stdev=19496.58 00:46:24.709 lat (usec): min=334, max=42003, avg=15373.66, stdev=19496.08 00:46:24.709 clat percentiles (usec): 00:46:24.709 | 1.00th=[ 355], 5.00th=[ 367], 10.00th=[ 379], 20.00th=[ 412], 00:46:24.709 | 30.00th=[ 429], 40.00th=[ 449], 50.00th=[ 494], 60.00th=[ 807], 00:46:24.709 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:46:24.709 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:46:24.709 | 99.99th=[42206] 00:46:24.709 bw ( KiB/s): min= 672, max= 1984, per=25.44%, avg=1041.65, stdev=314.44, samples=20 00:46:24.709 iops : min= 168, max= 496, avg=260.40, stdev=78.62, samples=20 00:46:24.709 lat (usec) : 500=50.50%, 750=4.95%, 1000=7.44% 00:46:24.709 lat (msec) : 2=0.31%, 50=36.81% 00:46:24.709 cpu : usr=97.62%, sys=1.88%, ctx=25, majf=0, minf=9 00:46:24.709 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:24.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:24.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:24.709 issued rwts: total=2608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:24.709 latency : target=0, window=0, percentile=100.00%, depth=4 00:46:24.709 filename1: (groupid=0, jobs=1): err= 0: pid=89566: Wed Apr 17 08:42:57 2024 00:46:24.709 read: IOPS=765, BW=3062KiB/s (3136kB/s)(29.9MiB/10006msec) 00:46:24.709 slat (nsec): min=5837, max=80039, avg=10247.20, stdev=6679.63 00:46:24.709 clat (usec): min=305, max=42480, avg=5195.09, stdev=13013.10 00:46:24.709 lat (usec): min=311, max=42490, avg=5205.33, stdev=13012.55 00:46:24.709 clat percentiles (usec): 00:46:24.709 | 1.00th=[ 359], 5.00th=[ 371], 10.00th=[ 383], 20.00th=[ 400], 00:46:24.709 | 30.00th=[ 420], 40.00th=[ 433], 50.00th=[ 445], 60.00th=[ 457], 00:46:24.709 | 70.00th=[ 478], 80.00th=[ 570], 90.00th=[40633], 95.00th=[41157], 00:46:24.709 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42730], 00:46:24.709 | 99.99th=[42730] 00:46:24.709 bw ( KiB/s): min= 704, max= 7616, per=77.57%, avg=3174.32, stdev=1708.13, samples=19 00:46:24.709 iops : min= 176, max= 1904, avg=793.47, stdev=426.86, samples=19 00:46:24.709 lat (usec) : 500=74.87%, 750=10.38%, 1000=2.95% 00:46:24.709 lat (msec) : 2=0.10%, 50=11.70% 00:46:24.709 cpu : usr=98.04%, sys=1.32%, ctx=59, majf=0, minf=0 00:46:24.709 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:24.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:24.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:24.709 issued rwts: total=7660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:24.709 latency : target=0, window=0, percentile=100.00%, depth=4 00:46:24.709 00:46:24.709 Run status group 0 (all jobs): 00:46:24.709 READ: bw=4092KiB/s (4190kB/s), 1039KiB/s-3062KiB/s (1064kB/s-3136kB/s), io=40.1MiB (42.1MB), run=10006-10038msec 00:46:24.709 08:42:57 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:46:24.709 08:42:57 -- target/dif.sh@43 -- # local sub 00:46:24.709 08:42:57 -- target/dif.sh@45 -- # for 
sub in "$@" 00:46:24.709 08:42:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:24.709 08:42:57 -- target/dif.sh@36 -- # local sub_id=0 00:46:24.709 08:42:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:24.709 08:42:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:24.709 08:42:57 -- common/autotest_common.sh@10 -- # set +x 00:46:24.709 08:42:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:24.709 08:42:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:24.709 08:42:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:24.709 08:42:57 -- common/autotest_common.sh@10 -- # set +x 00:46:24.709 08:42:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:24.709 08:42:57 -- target/dif.sh@45 -- # for sub in "$@" 00:46:24.709 08:42:57 -- target/dif.sh@46 -- # destroy_subsystem 1 00:46:24.709 08:42:57 -- target/dif.sh@36 -- # local sub_id=1 00:46:24.709 08:42:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:24.709 08:42:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:24.709 08:42:57 -- common/autotest_common.sh@10 -- # set +x 00:46:24.709 08:42:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:24.709 08:42:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:46:24.709 08:42:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:24.709 08:42:57 -- common/autotest_common.sh@10 -- # set +x 00:46:24.709 08:42:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:24.709 00:46:24.709 real 0m11.413s 00:46:24.709 user 0m20.574s 00:46:24.709 sys 0m0.669s 00:46:24.709 08:42:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:24.709 08:42:57 -- common/autotest_common.sh@10 -- # set +x 00:46:24.709 ************************************ 00:46:24.709 END TEST fio_dif_1_multi_subsystems 00:46:24.709 ************************************ 00:46:24.709 08:42:57 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:46:24.709 08:42:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:46:24.709 08:42:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:46:24.709 08:42:57 -- common/autotest_common.sh@10 -- # set +x 00:46:24.709 ************************************ 00:46:24.709 START TEST fio_dif_rand_params 00:46:24.709 ************************************ 00:46:24.709 08:42:57 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:46:24.709 08:42:57 -- target/dif.sh@100 -- # local NULL_DIF 00:46:24.709 08:42:57 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:46:24.709 08:42:57 -- target/dif.sh@103 -- # NULL_DIF=3 00:46:24.709 08:42:57 -- target/dif.sh@103 -- # bs=128k 00:46:24.709 08:42:57 -- target/dif.sh@103 -- # numjobs=3 00:46:24.709 08:42:57 -- target/dif.sh@103 -- # iodepth=3 00:46:24.709 08:42:57 -- target/dif.sh@103 -- # runtime=5 00:46:24.709 08:42:57 -- target/dif.sh@105 -- # create_subsystems 0 00:46:24.709 08:42:57 -- target/dif.sh@28 -- # local sub 00:46:24.709 08:42:57 -- target/dif.sh@30 -- # for sub in "$@" 00:46:24.709 08:42:57 -- target/dif.sh@31 -- # create_subsystem 0 00:46:24.709 08:42:57 -- target/dif.sh@18 -- # local sub_id=0 00:46:24.709 08:42:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:46:24.709 08:42:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:24.709 08:42:57 -- common/autotest_common.sh@10 -- # set +x 00:46:24.709 bdev_null0 00:46:24.709 08:42:57 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:24.709 08:42:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:24.709 08:42:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:24.709 08:42:57 -- common/autotest_common.sh@10 -- # set +x 00:46:24.709 08:42:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:24.709 08:42:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:24.709 08:42:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:24.709 08:42:57 -- common/autotest_common.sh@10 -- # set +x 00:46:24.709 08:42:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:24.709 08:42:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:24.709 08:42:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:24.709 08:42:57 -- common/autotest_common.sh@10 -- # set +x 00:46:24.709 [2024-04-17 08:42:57.705521] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:24.709 08:42:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:24.709 08:42:57 -- target/dif.sh@106 -- # fio /dev/fd/62 00:46:24.709 08:42:57 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:46:24.709 08:42:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:46:24.710 08:42:57 -- nvmf/common.sh@520 -- # config=() 00:46:24.710 08:42:57 -- nvmf/common.sh@520 -- # local subsystem config 00:46:24.710 08:42:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:46:24.710 08:42:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:24.710 08:42:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:46:24.710 { 00:46:24.710 "params": { 00:46:24.710 "name": "Nvme$subsystem", 00:46:24.710 "trtype": "$TEST_TRANSPORT", 00:46:24.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:24.710 "adrfam": "ipv4", 00:46:24.710 "trsvcid": "$NVMF_PORT", 00:46:24.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:24.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:24.710 "hdgst": ${hdgst:-false}, 00:46:24.710 "ddgst": ${ddgst:-false} 00:46:24.710 }, 00:46:24.710 "method": "bdev_nvme_attach_controller" 00:46:24.710 } 00:46:24.710 EOF 00:46:24.710 )") 00:46:24.710 08:42:57 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:24.710 08:42:57 -- target/dif.sh@82 -- # gen_fio_conf 00:46:24.710 08:42:57 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:46:24.710 08:42:57 -- target/dif.sh@54 -- # local file 00:46:24.710 08:42:57 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:24.710 08:42:57 -- common/autotest_common.sh@1318 -- # local sanitizers 00:46:24.710 08:42:57 -- target/dif.sh@56 -- # cat 00:46:24.710 08:42:57 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:24.710 08:42:57 -- common/autotest_common.sh@1320 -- # shift 00:46:24.710 08:42:57 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:46:24.710 08:42:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:46:24.710 08:42:57 -- nvmf/common.sh@542 -- # cat 00:46:24.710 08:42:57 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:24.710 08:42:57 
-- common/autotest_common.sh@1324 -- # grep libasan 00:46:24.710 08:42:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:46:24.710 08:42:57 -- target/dif.sh@72 -- # (( file <= files )) 00:46:24.710 08:42:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:46:24.710 08:42:57 -- nvmf/common.sh@544 -- # jq . 00:46:24.710 08:42:57 -- nvmf/common.sh@545 -- # IFS=, 00:46:24.710 08:42:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:46:24.710 "params": { 00:46:24.710 "name": "Nvme0", 00:46:24.710 "trtype": "tcp", 00:46:24.710 "traddr": "10.0.0.2", 00:46:24.710 "adrfam": "ipv4", 00:46:24.710 "trsvcid": "4420", 00:46:24.710 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:24.710 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:24.710 "hdgst": false, 00:46:24.710 "ddgst": false 00:46:24.710 }, 00:46:24.710 "method": "bdev_nvme_attach_controller" 00:46:24.710 }' 00:46:24.710 08:42:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:46:24.710 08:42:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:46:24.710 08:42:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:46:24.710 08:42:57 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:24.710 08:42:57 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:46:24.710 08:42:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:46:24.710 08:42:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:46:24.710 08:42:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:46:24.710 08:42:57 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:46:24.710 08:42:57 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:24.710 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:46:24.710 ... 00:46:24.710 fio-3.35 00:46:24.710 Starting 3 threads 00:46:25.278 [2024-04-17 08:42:58.358248] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:46:25.278 [2024-04-17 08:42:58.358308] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:46:30.552 00:46:30.552 filename0: (groupid=0, jobs=1): err= 0: pid=89724: Wed Apr 17 08:43:03 2024 00:46:30.552 read: IOPS=251, BW=31.5MiB/s (33.0MB/s)(158MiB/5004msec) 00:46:30.552 slat (nsec): min=6069, max=52052, avg=17929.41, stdev=8790.01 00:46:30.552 clat (usec): min=3696, max=52353, avg=11876.19, stdev=11547.31 00:46:30.552 lat (usec): min=3707, max=52380, avg=11894.12, stdev=11547.09 00:46:30.552 clat percentiles (usec): 00:46:30.552 | 1.00th=[ 4015], 5.00th=[ 6390], 10.00th=[ 6783], 20.00th=[ 7177], 00:46:30.552 | 30.00th=[ 7373], 40.00th=[ 7963], 50.00th=[ 8717], 60.00th=[ 9241], 00:46:30.552 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10814], 95.00th=[49021], 00:46:30.552 | 99.00th=[50594], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:46:30.552 | 99.99th=[52167] 00:46:30.552 bw ( KiB/s): min=27136, max=39936, per=31.29%, avg=32796.44, stdev=4876.52, samples=9 00:46:30.552 iops : min= 212, max= 312, avg=256.22, stdev=38.10, samples=9 00:46:30.552 lat (msec) : 4=0.79%, 10=77.40%, 20=13.24%, 50=6.03%, 100=2.54% 00:46:30.552 cpu : usr=94.82%, sys=3.76%, ctx=7, majf=0, minf=0 00:46:30.552 IO depths : 1=5.1%, 2=94.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:30.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:30.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:30.552 issued rwts: total=1261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:30.552 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:30.552 filename0: (groupid=0, jobs=1): err= 0: pid=89725: Wed Apr 17 08:43:03 2024 00:46:30.552 read: IOPS=235, BW=29.4MiB/s (30.8MB/s)(148MiB/5027msec) 00:46:30.552 slat (nsec): min=6200, max=50655, avg=15233.92, stdev=8552.88 00:46:30.552 clat (usec): min=3603, max=52883, avg=12737.90, stdev=12211.58 00:46:30.552 lat (usec): min=3610, max=52922, avg=12753.13, stdev=12212.49 00:46:30.552 clat percentiles (usec): 00:46:30.552 | 1.00th=[ 3654], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 6652], 00:46:30.552 | 30.00th=[ 6915], 40.00th=[ 7832], 50.00th=[ 9896], 60.00th=[10683], 00:46:30.552 | 70.00th=[11076], 80.00th=[11731], 90.00th=[12911], 95.00th=[50070], 00:46:30.552 | 99.00th=[51643], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:46:30.552 | 99.99th=[52691] 00:46:30.552 bw ( KiB/s): min=26624, max=36864, per=29.39%, avg=30799.11, stdev=3115.69, samples=9 00:46:30.552 iops : min= 208, max= 288, avg=240.56, stdev=24.41, samples=9 00:46:30.552 lat (msec) : 4=1.61%, 10=49.32%, 20=39.68%, 50=3.81%, 100=5.58% 00:46:30.552 cpu : usr=96.56%, sys=2.37%, ctx=75, majf=0, minf=0 00:46:30.552 IO depths : 1=2.4%, 2=97.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:30.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:30.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:30.552 issued rwts: total=1182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:30.552 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:30.552 filename0: (groupid=0, jobs=1): err= 0: pid=89726: Wed Apr 17 08:43:03 2024 00:46:30.552 read: IOPS=334, BW=41.8MiB/s (43.8MB/s)(209MiB/5002msec) 00:46:30.552 slat (nsec): min=6167, max=47627, avg=13089.99, stdev=8635.81 00:46:30.552 clat (usec): min=3237, max=54260, avg=8939.46, stdev=4608.66 00:46:30.552 lat (usec): min=3245, max=54269, avg=8952.55, stdev=4609.94 00:46:30.552 clat 
percentiles (usec): 00:46:30.552 | 1.00th=[ 3621], 5.00th=[ 3654], 10.00th=[ 3720], 20.00th=[ 7111], 00:46:30.552 | 30.00th=[ 7504], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8717], 00:46:30.552 | 70.00th=[11076], 80.00th=[11994], 90.00th=[12649], 95.00th=[13042], 00:46:30.552 | 99.00th=[14484], 99.50th=[48497], 99.90th=[54264], 99.95th=[54264], 00:46:30.552 | 99.99th=[54264] 00:46:30.552 bw ( KiB/s): min=35328, max=52992, per=40.14%, avg=42069.33, stdev=5640.72, samples=9 00:46:30.552 iops : min= 276, max= 414, avg=328.67, stdev=44.07, samples=9 00:46:30.552 lat (msec) : 4=14.47%, 10=51.05%, 20=33.77%, 50=0.36%, 100=0.36% 00:46:30.552 cpu : usr=96.62%, sys=2.16%, ctx=10, majf=0, minf=0 00:46:30.552 IO depths : 1=24.4%, 2=75.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:30.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:30.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:30.553 issued rwts: total=1673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:30.553 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:30.553 00:46:30.553 Run status group 0 (all jobs): 00:46:30.553 READ: bw=102MiB/s (107MB/s), 29.4MiB/s-41.8MiB/s (30.8MB/s-43.8MB/s), io=515MiB (539MB), run=5002-5027msec 00:46:30.553 08:43:03 -- target/dif.sh@107 -- # destroy_subsystems 0 00:46:30.553 08:43:03 -- target/dif.sh@43 -- # local sub 00:46:30.553 08:43:03 -- target/dif.sh@45 -- # for sub in "$@" 00:46:30.553 08:43:03 -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:30.553 08:43:03 -- target/dif.sh@36 -- # local sub_id=0 00:46:30.553 08:43:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:30.553 08:43:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:30.553 08:43:03 -- common/autotest_common.sh@10 -- # set +x 00:46:30.553 08:43:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:30.553 08:43:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:30.553 08:43:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:30.553 08:43:03 -- common/autotest_common.sh@10 -- # set +x 00:46:30.553 08:43:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:30.553 08:43:03 -- target/dif.sh@109 -- # NULL_DIF=2 00:46:30.553 08:43:03 -- target/dif.sh@109 -- # bs=4k 00:46:30.553 08:43:03 -- target/dif.sh@109 -- # numjobs=8 00:46:30.553 08:43:03 -- target/dif.sh@109 -- # iodepth=16 00:46:30.553 08:43:03 -- target/dif.sh@109 -- # runtime= 00:46:30.553 08:43:03 -- target/dif.sh@109 -- # files=2 00:46:30.553 08:43:03 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:46:30.553 08:43:03 -- target/dif.sh@28 -- # local sub 00:46:30.553 08:43:03 -- target/dif.sh@30 -- # for sub in "$@" 00:46:30.553 08:43:03 -- target/dif.sh@31 -- # create_subsystem 0 00:46:30.553 08:43:03 -- target/dif.sh@18 -- # local sub_id=0 00:46:30.553 08:43:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:46:30.553 08:43:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:30.553 08:43:03 -- common/autotest_common.sh@10 -- # set +x 00:46:30.553 bdev_null0 00:46:30.553 08:43:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:30.553 08:43:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:30.553 08:43:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:30.553 08:43:03 -- common/autotest_common.sh@10 -- # set +x 00:46:30.553 08:43:03 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:30.553 08:43:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:30.553 08:43:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:30.553 08:43:03 -- common/autotest_common.sh@10 -- # set +x 00:46:30.553 08:43:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:30.553 08:43:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:30.553 08:43:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:30.553 08:43:03 -- common/autotest_common.sh@10 -- # set +x 00:46:30.812 [2024-04-17 08:43:03.884828] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:30.812 08:43:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:30.812 08:43:03 -- target/dif.sh@30 -- # for sub in "$@" 00:46:30.812 08:43:03 -- target/dif.sh@31 -- # create_subsystem 1 00:46:30.812 08:43:03 -- target/dif.sh@18 -- # local sub_id=1 00:46:30.812 08:43:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:46:30.812 08:43:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:30.812 08:43:03 -- common/autotest_common.sh@10 -- # set +x 00:46:30.812 bdev_null1 00:46:30.812 08:43:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:30.812 08:43:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:46:30.812 08:43:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:30.812 08:43:03 -- common/autotest_common.sh@10 -- # set +x 00:46:30.812 08:43:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:30.812 08:43:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:46:30.812 08:43:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:30.812 08:43:03 -- common/autotest_common.sh@10 -- # set +x 00:46:30.812 08:43:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:30.812 08:43:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:30.812 08:43:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:30.812 08:43:03 -- common/autotest_common.sh@10 -- # set +x 00:46:30.812 08:43:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:30.812 08:43:03 -- target/dif.sh@30 -- # for sub in "$@" 00:46:30.812 08:43:03 -- target/dif.sh@31 -- # create_subsystem 2 00:46:30.812 08:43:03 -- target/dif.sh@18 -- # local sub_id=2 00:46:30.812 08:43:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:46:30.812 08:43:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:30.812 08:43:03 -- common/autotest_common.sh@10 -- # set +x 00:46:30.812 bdev_null2 00:46:30.812 08:43:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:30.812 08:43:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:46:30.812 08:43:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:30.812 08:43:03 -- common/autotest_common.sh@10 -- # set +x 00:46:30.812 08:43:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:30.812 08:43:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:46:30.812 08:43:03 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:46:30.812 08:43:03 -- common/autotest_common.sh@10 -- # set +x 00:46:30.812 08:43:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:30.812 08:43:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:46:30.812 08:43:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:30.812 08:43:03 -- common/autotest_common.sh@10 -- # set +x 00:46:30.812 08:43:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:30.812 08:43:03 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:46:30.812 08:43:03 -- target/dif.sh@112 -- # fio /dev/fd/62 00:46:30.812 08:43:03 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:46:30.812 08:43:03 -- nvmf/common.sh@520 -- # config=() 00:46:30.812 08:43:03 -- nvmf/common.sh@520 -- # local subsystem config 00:46:30.812 08:43:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:46:30.812 08:43:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:46:30.812 { 00:46:30.812 "params": { 00:46:30.812 "name": "Nvme$subsystem", 00:46:30.812 "trtype": "$TEST_TRANSPORT", 00:46:30.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:30.812 "adrfam": "ipv4", 00:46:30.812 "trsvcid": "$NVMF_PORT", 00:46:30.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:30.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:30.812 "hdgst": ${hdgst:-false}, 00:46:30.812 "ddgst": ${ddgst:-false} 00:46:30.812 }, 00:46:30.812 "method": "bdev_nvme_attach_controller" 00:46:30.812 } 00:46:30.812 EOF 00:46:30.812 )") 00:46:30.812 08:43:03 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:30.812 08:43:03 -- target/dif.sh@82 -- # gen_fio_conf 00:46:30.812 08:43:03 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:30.812 08:43:03 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:46:30.812 08:43:03 -- target/dif.sh@54 -- # local file 00:46:30.812 08:43:03 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:30.812 08:43:03 -- target/dif.sh@56 -- # cat 00:46:30.812 08:43:03 -- common/autotest_common.sh@1318 -- # local sanitizers 00:46:30.812 08:43:03 -- nvmf/common.sh@542 -- # cat 00:46:30.812 08:43:03 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:30.812 08:43:03 -- common/autotest_common.sh@1320 -- # shift 00:46:30.812 08:43:03 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:46:30.812 08:43:03 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:46:30.812 08:43:03 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:30.812 08:43:03 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:46:30.812 08:43:03 -- common/autotest_common.sh@1324 -- # grep libasan 00:46:30.812 08:43:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:46:30.812 08:43:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:46:30.812 { 00:46:30.812 "params": { 00:46:30.812 "name": "Nvme$subsystem", 00:46:30.812 "trtype": "$TEST_TRANSPORT", 00:46:30.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:30.812 "adrfam": "ipv4", 00:46:30.812 "trsvcid": "$NVMF_PORT", 00:46:30.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:30.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:30.812 "hdgst": ${hdgst:-false}, 00:46:30.812 "ddgst": ${ddgst:-false} 
00:46:30.812 }, 00:46:30.812 "method": "bdev_nvme_attach_controller" 00:46:30.812 } 00:46:30.812 EOF 00:46:30.812 )") 00:46:30.812 08:43:03 -- target/dif.sh@72 -- # (( file = 1 )) 00:46:30.812 08:43:03 -- target/dif.sh@72 -- # (( file <= files )) 00:46:30.812 08:43:03 -- target/dif.sh@73 -- # cat 00:46:30.812 08:43:03 -- nvmf/common.sh@542 -- # cat 00:46:30.812 08:43:03 -- target/dif.sh@72 -- # (( file++ )) 00:46:30.812 08:43:03 -- target/dif.sh@72 -- # (( file <= files )) 00:46:30.812 08:43:03 -- target/dif.sh@73 -- # cat 00:46:30.812 08:43:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:46:30.812 08:43:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:46:30.812 { 00:46:30.812 "params": { 00:46:30.812 "name": "Nvme$subsystem", 00:46:30.812 "trtype": "$TEST_TRANSPORT", 00:46:30.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:30.812 "adrfam": "ipv4", 00:46:30.812 "trsvcid": "$NVMF_PORT", 00:46:30.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:30.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:30.812 "hdgst": ${hdgst:-false}, 00:46:30.812 "ddgst": ${ddgst:-false} 00:46:30.812 }, 00:46:30.812 "method": "bdev_nvme_attach_controller" 00:46:30.812 } 00:46:30.812 EOF 00:46:30.812 )") 00:46:30.812 08:43:04 -- target/dif.sh@72 -- # (( file++ )) 00:46:30.812 08:43:04 -- target/dif.sh@72 -- # (( file <= files )) 00:46:30.812 08:43:04 -- nvmf/common.sh@542 -- # cat 00:46:30.812 08:43:04 -- nvmf/common.sh@544 -- # jq . 00:46:30.812 08:43:04 -- nvmf/common.sh@545 -- # IFS=, 00:46:30.813 08:43:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:46:30.813 "params": { 00:46:30.813 "name": "Nvme0", 00:46:30.813 "trtype": "tcp", 00:46:30.813 "traddr": "10.0.0.2", 00:46:30.813 "adrfam": "ipv4", 00:46:30.813 "trsvcid": "4420", 00:46:30.813 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:30.813 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:30.813 "hdgst": false, 00:46:30.813 "ddgst": false 00:46:30.813 }, 00:46:30.813 "method": "bdev_nvme_attach_controller" 00:46:30.813 },{ 00:46:30.813 "params": { 00:46:30.813 "name": "Nvme1", 00:46:30.813 "trtype": "tcp", 00:46:30.813 "traddr": "10.0.0.2", 00:46:30.813 "adrfam": "ipv4", 00:46:30.813 "trsvcid": "4420", 00:46:30.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:30.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:30.813 "hdgst": false, 00:46:30.813 "ddgst": false 00:46:30.813 }, 00:46:30.813 "method": "bdev_nvme_attach_controller" 00:46:30.813 },{ 00:46:30.813 "params": { 00:46:30.813 "name": "Nvme2", 00:46:30.813 "trtype": "tcp", 00:46:30.813 "traddr": "10.0.0.2", 00:46:30.813 "adrfam": "ipv4", 00:46:30.813 "trsvcid": "4420", 00:46:30.813 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:46:30.813 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:46:30.813 "hdgst": false, 00:46:30.813 "ddgst": false 00:46:30.813 }, 00:46:30.813 "method": "bdev_nvme_attach_controller" 00:46:30.813 }' 00:46:30.813 08:43:04 -- common/autotest_common.sh@1324 -- # asan_lib= 00:46:30.813 08:43:04 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:46:30.813 08:43:04 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:46:30.813 08:43:04 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:30.813 08:43:04 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:46:30.813 08:43:04 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:46:30.813 08:43:04 -- common/autotest_common.sh@1324 -- # asan_lib= 00:46:30.813 08:43:04 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 
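[Note for readers reconstructing this run: fio receives two pipes here — the bdev JSON config on /dev/fd/62, printed above, and a job file on /dev/fd/61 that the log does not echo. A minimal sketch of an equivalent job file for this NULL_DIF=2 pass, matching the observed parameters (bs=4k, iodepth=16, numjobs=8, 24 threads total) and assuming the default Nvme0n1/Nvme1n1/Nvme2n1 bdev names produced by the bdev_nvme_attach_controller calls in the JSON config, would be:

; ioengine=spdk_bdev and the JSON config are passed on the fio command line below
; thread=1 is required by the SPDK bdev fio plugin
[global]
thread=1
bs=4k
rw=randread
iodepth=16
[filename0]
filename=Nvme0n1
numjobs=8
[filename1]
filename=Nvme1n1
numjobs=8
[filename2]
filename=Nvme2n1
numjobs=8

Invocation as in the trace that follows: /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 <jobfile>.]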
00:46:30.813 08:43:04 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:46:30.813 08:43:04 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:31.071 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:31.071 ... 00:46:31.071 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:31.071 ... 00:46:31.071 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:31.071 ... 00:46:31.071 fio-3.35 00:46:31.071 Starting 24 threads 00:46:31.639 [2024-04-17 08:43:04.831933] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:46:31.639 [2024-04-17 08:43:04.831997] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:46:43.891 00:46:43.891 filename0: (groupid=0, jobs=1): err= 0: pid=89821: Wed Apr 17 08:43:16 2024 00:46:43.891 read: IOPS=564, BW=2259KiB/s (2313kB/s)(22.2MiB/10048msec) 00:46:43.891 slat (usec): min=5, max=11044, avg=16.83, stdev=188.59 00:46:43.891 clat (usec): min=812, max=426299, avg=28194.17, stdev=39531.62 00:46:43.891 lat (usec): min=820, max=426324, avg=28211.00, stdev=39532.90 00:46:43.891 clat percentiles (usec): 00:46:43.891 | 1.00th=[ 1385], 5.00th=[ 1762], 10.00th=[ 2704], 20.00th=[ 6194], 00:46:43.891 | 30.00th=[ 7373], 40.00th=[ 7963], 50.00th=[ 9503], 60.00th=[ 16057], 00:46:43.891 | 70.00th=[ 40633], 80.00th=[ 52167], 90.00th=[ 66323], 95.00th=[ 80217], 00:46:43.891 | 99.00th=[200279], 99.50th=[325059], 99.90th=[425722], 99.95th=[425722], 00:46:43.891 | 99.99th=[425722] 00:46:43.891 bw ( KiB/s): min= 128, max=11480, per=9.06%, avg=2263.20, stdev=2894.80, samples=20 00:46:43.891 iops : min= 32, max= 2870, avg=565.75, stdev=723.72, samples=20 00:46:43.891 lat (usec) : 1000=0.12% 00:46:43.891 lat (msec) : 2=5.83%, 4=9.82%, 10=35.41%, 20=10.36%, 50=17.84% 00:46:43.891 lat (msec) : 100=19.37%, 250=0.58%, 500=0.67% 00:46:43.891 cpu : usr=47.95%, sys=0.57%, ctx=1131, majf=0, minf=0 00:46:43.891 IO depths : 1=2.1%, 2=4.4%, 4=12.9%, 8=69.6%, 16=10.9%, 32=0.0%, >=64=0.0% 00:46:43.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.891 complete : 0=0.0%, 4=90.8%, 8=4.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.891 issued rwts: total=5674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.891 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.891 filename0: (groupid=0, jobs=1): err= 0: pid=89822: Wed Apr 17 08:43:16 2024 00:46:43.891 read: IOPS=245, BW=981KiB/s (1005kB/s)(9824KiB/10013msec) 00:46:43.892 slat (usec): min=4, max=8040, avg=25.34, stdev=263.54 00:46:43.892 clat (msec): min=18, max=562, avg=65.06, stdev=54.19 00:46:43.892 lat (msec): min=18, max=562, avg=65.09, stdev=54.20 00:46:43.892 clat percentiles (msec): 00:46:43.892 | 1.00th=[ 23], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 37], 00:46:43.892 | 30.00th=[ 44], 40.00th=[ 50], 50.00th=[ 57], 60.00th=[ 63], 00:46:43.892 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 101], 00:46:43.892 | 99.00th=[ 359], 99.50th=[ 567], 99.90th=[ 567], 99.95th=[ 567], 00:46:43.892 | 99.99th=[ 567] 00:46:43.892 bw ( KiB/s): min= 128, max= 1600, per=3.75%, avg=935.26, stdev=363.56, samples=19 00:46:43.892 iops : min= 32, max= 400, avg=233.79, stdev=90.89, 
samples=19 00:46:43.892 lat (msec) : 20=0.41%, 50=41.94%, 100=52.57%, 250=3.54%, 500=0.90% 00:46:43.892 lat (msec) : 750=0.65% 00:46:43.892 cpu : usr=40.92%, sys=0.37%, ctx=1306, majf=0, minf=9 00:46:43.892 IO depths : 1=1.6%, 2=3.5%, 4=11.6%, 8=71.5%, 16=11.8%, 32=0.0%, >=64=0.0% 00:46:43.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.892 complete : 0=0.0%, 4=90.5%, 8=4.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.892 issued rwts: total=2456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.892 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.892 filename0: (groupid=0, jobs=1): err= 0: pid=89823: Wed Apr 17 08:43:16 2024 00:46:43.892 read: IOPS=215, BW=862KiB/s (883kB/s)(8636KiB/10019msec) 00:46:43.892 slat (usec): min=3, max=8041, avg=43.66, stdev=486.95 00:46:43.892 clat (msec): min=21, max=594, avg=73.88, stdev=60.10 00:46:43.892 lat (msec): min=21, max=594, avg=73.93, stdev=60.12 00:46:43.892 clat percentiles (msec): 00:46:43.892 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 48], 00:46:43.892 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 71], 00:46:43.892 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 118], 00:46:43.892 | 99.00th=[ 443], 99.50th=[ 592], 99.90th=[ 592], 99.95th=[ 592], 00:46:43.892 | 99.99th=[ 592] 00:46:43.892 bw ( KiB/s): min= 128, max= 1224, per=3.33%, avg=831.21, stdev=277.42, samples=19 00:46:43.892 iops : min= 32, max= 306, avg=207.79, stdev=69.35, samples=19 00:46:43.892 lat (msec) : 50=24.69%, 100=65.96%, 250=7.87%, 500=0.74%, 750=0.74% 00:46:43.892 cpu : usr=33.03%, sys=0.36%, ctx=914, majf=0, minf=9 00:46:43.892 IO depths : 1=1.7%, 2=4.1%, 4=13.1%, 8=69.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:46:43.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.892 complete : 0=0.0%, 4=90.9%, 8=4.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.892 issued rwts: total=2159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.892 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.892 filename0: (groupid=0, jobs=1): err= 0: pid=89824: Wed Apr 17 08:43:16 2024 00:46:43.892 read: IOPS=218, BW=875KiB/s (896kB/s)(8760KiB/10017msec) 00:46:43.892 slat (usec): min=3, max=8065, avg=20.86, stdev=192.53 00:46:43.892 clat (msec): min=23, max=598, avg=73.01, stdev=60.17 00:46:43.892 lat (msec): min=23, max=598, avg=73.04, stdev=60.18 00:46:43.892 clat percentiles (msec): 00:46:43.892 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 41], 20.00th=[ 49], 00:46:43.892 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 69], 00:46:43.892 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 103], 95.00th=[ 113], 00:46:43.892 | 99.00th=[ 451], 99.50th=[ 600], 99.90th=[ 600], 99.95th=[ 600], 00:46:43.892 | 99.99th=[ 600] 00:46:43.892 bw ( KiB/s): min= 128, max= 1536, per=3.48%, avg=868.75, stdev=298.86, samples=20 00:46:43.892 iops : min= 32, max= 384, avg=217.15, stdev=74.71, samples=20 00:46:43.892 lat (msec) : 50=21.69%, 100=67.99%, 250=8.86%, 500=0.73%, 750=0.73% 00:46:43.892 cpu : usr=40.62%, sys=0.25%, ctx=1205, majf=0, minf=9 00:46:43.892 IO depths : 1=3.2%, 2=7.0%, 4=18.0%, 8=62.1%, 16=9.7%, 32=0.0%, >=64=0.0% 00:46:43.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.892 complete : 0=0.0%, 4=92.0%, 8=2.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.892 issued rwts: total=2190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.892 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.892 filename0: (groupid=0, jobs=1): err= 0: 
pid=89825: Wed Apr 17 08:43:16 2024 00:46:43.892 read: IOPS=233, BW=933KiB/s (955kB/s)(9356KiB/10031msec) 00:46:43.892 slat (nsec): min=4017, max=94276, avg=14870.56, stdev=12167.08 00:46:43.892 clat (msec): min=21, max=426, avg=68.52, stdev=46.94 00:46:43.892 lat (msec): min=21, max=426, avg=68.54, stdev=46.94 00:46:43.892 clat percentiles (msec): 00:46:43.892 | 1.00th=[ 25], 5.00th=[ 33], 10.00th=[ 38], 20.00th=[ 46], 00:46:43.892 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 65], 00:46:43.892 | 70.00th=[ 71], 80.00th=[ 79], 90.00th=[ 96], 95.00th=[ 110], 00:46:43.892 | 99.00th=[ 359], 99.50th=[ 384], 99.90th=[ 426], 99.95th=[ 426], 00:46:43.892 | 99.99th=[ 426] 00:46:43.892 bw ( KiB/s): min= 208, max= 1256, per=3.60%, avg=898.89, stdev=292.35, samples=19 00:46:43.892 iops : min= 52, max= 314, avg=224.68, stdev=73.07, samples=19 00:46:43.892 lat (msec) : 50=31.30%, 100=60.28%, 250=6.37%, 500=2.05% 00:46:43.892 cpu : usr=43.06%, sys=0.40%, ctx=1327, majf=0, minf=9 00:46:43.892 IO depths : 1=1.2%, 2=2.6%, 4=9.4%, 8=74.1%, 16=12.7%, 32=0.0%, >=64=0.0% 00:46:43.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.892 complete : 0=0.0%, 4=90.3%, 8=5.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.892 issued rwts: total=2339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.892 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.892 filename0: (groupid=0, jobs=1): err= 0: pid=89826: Wed Apr 17 08:43:16 2024 00:46:43.892 read: IOPS=222, BW=889KiB/s (911kB/s)(8928KiB/10039msec) 00:46:43.892 slat (usec): min=3, max=4042, avg=20.93, stdev=170.06 00:46:43.892 clat (msec): min=16, max=600, avg=71.70, stdev=59.18 00:46:43.892 lat (msec): min=16, max=600, avg=71.72, stdev=59.18 00:46:43.892 clat percentiles (msec): 00:46:43.892 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 48], 00:46:43.892 | 30.00th=[ 53], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 69], 00:46:43.892 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 99], 95.00th=[ 108], 00:46:43.892 | 99.00th=[ 443], 99.50th=[ 600], 99.90th=[ 600], 99.95th=[ 600], 00:46:43.892 | 99.99th=[ 600] 00:46:43.892 bw ( KiB/s): min= 128, max= 1376, per=3.55%, avg=887.90, stdev=339.44, samples=20 00:46:43.892 iops : min= 32, max= 344, avg=221.95, stdev=84.82, samples=20 00:46:43.892 lat (msec) : 20=0.27%, 50=25.99%, 100=65.95%, 250=6.36%, 500=0.72% 00:46:43.892 lat (msec) : 750=0.72% 00:46:43.892 cpu : usr=38.24%, sys=0.24%, ctx=986, majf=0, minf=9 00:46:43.892 IO depths : 1=2.2%, 2=4.9%, 4=14.3%, 8=67.7%, 16=10.9%, 32=0.0%, >=64=0.0% 00:46:43.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.892 complete : 0=0.0%, 4=91.2%, 8=3.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.892 issued rwts: total=2232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.892 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.892 filename0: (groupid=0, jobs=1): err= 0: pid=89827: Wed Apr 17 08:43:16 2024 00:46:43.892 read: IOPS=484, BW=1940KiB/s (1986kB/s)(19.0MiB/10031msec) 00:46:43.892 slat (usec): min=5, max=8020, avg=16.79, stdev=159.01 00:46:43.892 clat (usec): min=872, max=575958, avg=32841.03, stdev=45184.87 00:46:43.892 lat (usec): min=879, max=575966, avg=32857.81, stdev=45185.13 00:46:43.892 clat percentiles (usec): 00:46:43.892 | 1.00th=[ 1303], 5.00th=[ 2114], 10.00th=[ 5014], 20.00th=[ 6849], 00:46:43.892 | 30.00th=[ 7767], 40.00th=[ 9503], 50.00th=[ 14091], 60.00th=[ 23725], 00:46:43.892 | 70.00th=[ 45876], 80.00th=[ 58459], 90.00th=[ 72877], 95.00th=[ 95945], 
00:46:43.892 | 99.00th=[225444], 99.50th=[346031], 99.90th=[434111], 99.95th=[574620], 00:46:43.892 | 99.99th=[574620] 00:46:43.892 bw ( KiB/s): min= 128, max= 8592, per=7.76%, avg=1936.90, stdev=2418.39, samples=20 00:46:43.892 iops : min= 32, max= 2148, avg=484.20, stdev=604.57, samples=20 00:46:43.892 lat (usec) : 1000=0.19% 00:46:43.892 lat (msec) : 2=4.54%, 4=4.50%, 10=32.42%, 20=17.19%, 50=15.79% 00:46:43.892 lat (msec) : 100=21.11%, 250=3.47%, 500=0.70%, 750=0.08% 00:46:43.892 cpu : usr=42.84%, sys=1.05%, ctx=1453, majf=0, minf=9 00:46:43.892 IO depths : 1=1.0%, 2=2.2%, 4=8.2%, 8=76.3%, 16=12.3%, 32=0.0%, >=64=0.0% 00:46:43.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.892 complete : 0=0.0%, 4=89.9%, 8=5.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.892 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.892 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.892 filename0: (groupid=0, jobs=1): err= 0: pid=89828: Wed Apr 17 08:43:16 2024 00:46:43.892 read: IOPS=471, BW=1888KiB/s (1933kB/s)(18.6MiB/10070msec) 00:46:43.892 slat (usec): min=6, max=8001, avg=22.91, stdev=222.19 00:46:43.892 clat (usec): min=857, max=428851, avg=33658.78, stdev=44193.66 00:46:43.892 lat (usec): min=872, max=428871, avg=33681.69, stdev=44190.26 00:46:43.892 clat percentiles (usec): 00:46:43.892 | 1.00th=[ 1565], 5.00th=[ 2540], 10.00th=[ 4047], 20.00th=[ 7046], 00:46:43.892 | 30.00th=[ 8094], 40.00th=[ 10683], 50.00th=[ 14222], 60.00th=[ 22676], 00:46:43.892 | 70.00th=[ 47973], 80.00th=[ 59507], 90.00th=[ 80217], 95.00th=[ 95945], 00:46:43.892 | 99.00th=[219153], 99.50th=[346031], 99.90th=[429917], 99.95th=[429917], 00:46:43.892 | 99.99th=[429917] 00:46:43.892 bw ( KiB/s): min= 128, max= 6800, per=7.58%, avg=1893.30, stdev=2194.45, samples=20 00:46:43.892 iops : min= 32, max= 1700, avg=473.30, stdev=548.59, samples=20 00:46:43.892 lat (usec) : 1000=0.02% 00:46:43.892 lat (msec) : 2=2.19%, 4=7.64%, 10=27.79%, 20=21.46%, 50=13.61% 00:46:43.892 lat (msec) : 100=23.25%, 250=3.24%, 500=0.80% 00:46:43.892 cpu : usr=37.79%, sys=0.31%, ctx=1051, majf=0, minf=9 00:46:43.892 IO depths : 1=1.5%, 2=3.3%, 4=10.7%, 8=72.6%, 16=12.0%, 32=0.0%, >=64=0.0% 00:46:43.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.892 complete : 0=0.0%, 4=90.4%, 8=4.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.892 issued rwts: total=4753,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.892 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.892 filename1: (groupid=0, jobs=1): err= 0: pid=89829: Wed Apr 17 08:43:16 2024 00:46:43.892 read: IOPS=215, BW=862KiB/s (883kB/s)(8620KiB/10001msec) 00:46:43.892 slat (usec): min=3, max=8058, avg=22.50, stdev=244.44 00:46:43.892 clat (msec): min=2, max=565, avg=74.08, stdev=58.04 00:46:43.892 lat (msec): min=2, max=565, avg=74.11, stdev=58.04 00:46:43.892 clat percentiles (msec): 00:46:43.892 | 1.00th=[ 7], 5.00th=[ 31], 10.00th=[ 39], 20.00th=[ 48], 00:46:43.892 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 72], 00:46:43.892 | 70.00th=[ 82], 80.00th=[ 86], 90.00th=[ 97], 95.00th=[ 121], 00:46:43.892 | 99.00th=[ 359], 99.50th=[ 567], 99.90th=[ 567], 99.95th=[ 567], 00:46:43.893 | 99.99th=[ 567] 00:46:43.893 bw ( KiB/s): min= 128, max= 1152, per=3.24%, avg=810.42, stdev=291.80, samples=19 00:46:43.893 iops : min= 32, max= 288, avg=202.58, stdev=72.95, samples=19 00:46:43.893 lat (msec) : 4=0.28%, 10=1.72%, 20=0.74%, 50=23.20%, 100=66.13% 00:46:43.893 lat (msec) : 
250=5.80%, 500=1.39%, 750=0.74% 00:46:43.893 cpu : usr=35.47%, sys=0.27%, ctx=882, majf=0, minf=9 00:46:43.893 IO depths : 1=1.7%, 2=3.9%, 4=12.9%, 8=69.6%, 16=11.9%, 32=0.0%, >=64=0.0% 00:46:43.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.893 complete : 0=0.0%, 4=90.5%, 8=4.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.893 issued rwts: total=2155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.893 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.893 filename1: (groupid=0, jobs=1): err= 0: pid=89830: Wed Apr 17 08:43:16 2024 00:46:43.893 read: IOPS=215, BW=861KiB/s (882kB/s)(8612KiB/10001msec) 00:46:43.893 slat (usec): min=5, max=8065, avg=23.20, stdev=253.42 00:46:43.893 clat (usec): min=1183, max=571502, avg=74157.49, stdev=58920.54 00:46:43.893 lat (usec): min=1189, max=571536, avg=74180.69, stdev=58920.80 00:46:43.893 clat percentiles (msec): 00:46:43.893 | 1.00th=[ 3], 5.00th=[ 23], 10.00th=[ 36], 20.00th=[ 49], 00:46:43.893 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 71], 00:46:43.893 | 70.00th=[ 77], 80.00th=[ 87], 90.00th=[ 108], 95.00th=[ 123], 00:46:43.893 | 99.00th=[ 359], 99.50th=[ 575], 99.90th=[ 575], 99.95th=[ 575], 00:46:43.893 | 99.99th=[ 575] 00:46:43.893 bw ( KiB/s): min= 128, max= 1184, per=3.15%, avg=786.84, stdev=271.68, samples=19 00:46:43.893 iops : min= 32, max= 296, avg=196.68, stdev=67.91, samples=19 00:46:43.893 lat (msec) : 2=0.74%, 4=1.35%, 10=1.63%, 20=0.74%, 50=19.04% 00:46:43.893 lat (msec) : 100=64.75%, 250=9.52%, 500=1.49%, 750=0.74% 00:46:43.893 cpu : usr=33.90%, sys=0.24%, ctx=937, majf=0, minf=9 00:46:43.893 IO depths : 1=2.6%, 2=5.8%, 4=15.5%, 8=65.7%, 16=10.5%, 32=0.0%, >=64=0.0% 00:46:43.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.893 complete : 0=0.0%, 4=91.6%, 8=3.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.893 issued rwts: total=2153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.893 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.893 filename1: (groupid=0, jobs=1): err= 0: pid=89831: Wed Apr 17 08:43:16 2024 00:46:43.893 read: IOPS=214, BW=857KiB/s (878kB/s)(8580KiB/10007msec) 00:46:43.893 slat (usec): min=4, max=4023, avg=15.95, stdev=87.18 00:46:43.893 clat (msec): min=7, max=567, avg=74.52, stdev=55.85 00:46:43.893 lat (msec): min=7, max=567, avg=74.53, stdev=55.85 00:46:43.893 clat percentiles (msec): 00:46:43.893 | 1.00th=[ 11], 5.00th=[ 32], 10.00th=[ 40], 20.00th=[ 48], 00:46:43.893 | 30.00th=[ 56], 40.00th=[ 60], 50.00th=[ 66], 60.00th=[ 72], 00:46:43.893 | 70.00th=[ 82], 80.00th=[ 89], 90.00th=[ 107], 95.00th=[ 114], 00:46:43.893 | 99.00th=[ 347], 99.50th=[ 567], 99.90th=[ 567], 99.95th=[ 567], 00:46:43.893 | 99.99th=[ 567] 00:46:43.893 bw ( KiB/s): min= 128, max= 1120, per=3.24%, avg=808.89, stdev=271.72, samples=19 00:46:43.893 iops : min= 32, max= 280, avg=202.21, stdev=67.94, samples=19 00:46:43.893 lat (msec) : 10=0.93%, 20=1.68%, 50=22.38%, 100=62.98%, 250=10.07% 00:46:43.893 lat (msec) : 500=1.40%, 750=0.56% 00:46:43.893 cpu : usr=37.70%, sys=0.36%, ctx=994, majf=0, minf=9 00:46:43.893 IO depths : 1=2.0%, 2=4.9%, 4=14.8%, 8=66.8%, 16=11.5%, 32=0.0%, >=64=0.0% 00:46:43.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.893 complete : 0=0.0%, 4=91.5%, 8=3.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.893 issued rwts: total=2145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.893 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.893 filename1: 
(groupid=0, jobs=1): err= 0: pid=89832: Wed Apr 17 08:43:16 2024 00:46:43.893 read: IOPS=222, BW=891KiB/s (913kB/s)(8916KiB/10003msec) 00:46:43.893 slat (usec): min=3, max=8048, avg=21.53, stdev=240.59 00:46:43.893 clat (msec): min=25, max=561, avg=71.65, stdev=55.90 00:46:43.893 lat (msec): min=25, max=561, avg=71.67, stdev=55.91 00:46:43.893 clat percentiles (msec): 00:46:43.893 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 38], 20.00th=[ 48], 00:46:43.893 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 70], 00:46:43.893 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 113], 00:46:43.893 | 99.00th=[ 363], 99.50th=[ 558], 99.90th=[ 558], 99.95th=[ 558], 00:46:43.893 | 99.99th=[ 558] 00:46:43.893 bw ( KiB/s): min= 128, max= 1208, per=3.42%, avg=854.89, stdev=297.39, samples=19 00:46:43.893 iops : min= 32, max= 302, avg=213.68, stdev=74.35, samples=19 00:46:43.893 lat (msec) : 50=29.70%, 100=61.64%, 250=6.51%, 500=1.44%, 750=0.72% 00:46:43.893 cpu : usr=34.93%, sys=0.27%, ctx=901, majf=0, minf=9 00:46:43.893 IO depths : 1=2.1%, 2=4.4%, 4=13.8%, 8=68.9%, 16=10.8%, 32=0.0%, >=64=0.0% 00:46:43.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.893 complete : 0=0.0%, 4=90.8%, 8=3.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.893 issued rwts: total=2229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.893 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.893 filename1: (groupid=0, jobs=1): err= 0: pid=89833: Wed Apr 17 08:43:16 2024 00:46:43.893 read: IOPS=289, BW=1157KiB/s (1184kB/s)(11.3MiB/10015msec) 00:46:43.893 slat (usec): min=4, max=8072, avg=19.09, stdev=218.17 00:46:43.893 clat (usec): min=1560, max=370832, avg=55201.38, stdev=47336.48 00:46:43.893 lat (usec): min=1570, max=370849, avg=55220.47, stdev=47335.12 00:46:43.893 clat percentiles (msec): 00:46:43.893 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 23], 00:46:43.893 | 30.00th=[ 40], 40.00th=[ 46], 50.00th=[ 52], 60.00th=[ 58], 00:46:43.893 | 70.00th=[ 63], 80.00th=[ 72], 90.00th=[ 89], 95.00th=[ 100], 00:46:43.893 | 99.00th=[ 347], 99.50th=[ 372], 99.90th=[ 372], 99.95th=[ 372], 00:46:43.893 | 99.99th=[ 372] 00:46:43.893 bw ( KiB/s): min= 128, max= 5064, per=4.61%, avg=1151.85, stdev=974.73, samples=20 00:46:43.893 iops : min= 32, max= 1266, avg=287.95, stdev=243.68, samples=20 00:46:43.893 lat (msec) : 2=0.97%, 4=0.76%, 10=10.01%, 20=5.32%, 50=31.22% 00:46:43.893 lat (msec) : 100=47.62%, 250=2.45%, 500=1.66% 00:46:43.893 cpu : usr=45.20%, sys=0.45%, ctx=1112, majf=0, minf=9 00:46:43.893 IO depths : 1=2.0%, 2=4.2%, 4=12.1%, 8=70.4%, 16=11.2%, 32=0.0%, >=64=0.0% 00:46:43.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.893 complete : 0=0.0%, 4=90.7%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.893 issued rwts: total=2896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.893 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.893 filename1: (groupid=0, jobs=1): err= 0: pid=89834: Wed Apr 17 08:43:16 2024 00:46:43.893 read: IOPS=232, BW=931KiB/s (953kB/s)(9324KiB/10018msec) 00:46:43.893 slat (usec): min=4, max=8038, avg=18.54, stdev=186.06 00:46:43.893 clat (msec): min=16, max=606, avg=68.59, stdev=58.21 00:46:43.893 lat (msec): min=16, max=606, avg=68.61, stdev=58.24 00:46:43.893 clat percentiles (msec): 00:46:43.893 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 46], 00:46:43.893 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 65], 00:46:43.893 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 92], 
95.00th=[ 106], 00:46:43.893 | 99.00th=[ 443], 99.50th=[ 600], 99.90th=[ 609], 99.95th=[ 609], 00:46:43.893 | 99.99th=[ 609] 00:46:43.893 bw ( KiB/s): min= 128, max= 1376, per=3.59%, avg=895.16, stdev=320.02, samples=19 00:46:43.893 iops : min= 32, max= 344, avg=223.79, stdev=80.01, samples=19 00:46:43.893 lat (msec) : 20=0.51%, 50=29.30%, 100=64.26%, 250=4.55%, 500=0.69% 00:46:43.893 lat (msec) : 750=0.69% 00:46:43.893 cpu : usr=36.40%, sys=0.33%, ctx=1089, majf=0, minf=9 00:46:43.893 IO depths : 1=2.3%, 2=5.1%, 4=14.4%, 8=67.2%, 16=11.0%, 32=0.0%, >=64=0.0% 00:46:43.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.893 complete : 0=0.0%, 4=91.3%, 8=3.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.893 issued rwts: total=2331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.893 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.893 filename1: (groupid=0, jobs=1): err= 0: pid=89835: Wed Apr 17 08:43:16 2024 00:46:43.893 read: IOPS=227, BW=910KiB/s (932kB/s)(9104KiB/10001msec) 00:46:43.893 slat (usec): min=6, max=8034, avg=23.33, stdev=238.98 00:46:43.893 clat (usec): min=1289, max=599417, avg=70116.02, stdev=60210.25 00:46:43.893 lat (usec): min=1296, max=599471, avg=70139.35, stdev=60240.41 00:46:43.893 clat percentiles (msec): 00:46:43.893 | 1.00th=[ 4], 5.00th=[ 25], 10.00th=[ 33], 20.00th=[ 46], 00:46:43.893 | 30.00th=[ 53], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 68], 00:46:43.893 | 70.00th=[ 73], 80.00th=[ 85], 90.00th=[ 97], 95.00th=[ 117], 00:46:43.893 | 99.00th=[ 456], 99.50th=[ 592], 99.90th=[ 600], 99.95th=[ 600], 00:46:43.893 | 99.99th=[ 600] 00:46:43.893 bw ( KiB/s): min= 128, max= 1280, per=3.34%, avg=835.26, stdev=308.40, samples=19 00:46:43.893 iops : min= 32, max= 320, avg=208.79, stdev=77.09, samples=19 00:46:43.893 lat (msec) : 2=0.35%, 4=2.02%, 10=0.44%, 20=0.88%, 50=22.01% 00:46:43.893 lat (msec) : 100=67.66%, 250=5.23%, 500=0.70%, 750=0.70% 00:46:43.893 cpu : usr=41.72%, sys=0.43%, ctx=1238, majf=0, minf=9 00:46:43.893 IO depths : 1=1.4%, 2=4.6%, 4=14.6%, 8=67.6%, 16=11.8%, 32=0.0%, >=64=0.0% 00:46:43.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.893 complete : 0=0.0%, 4=91.6%, 8=3.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.893 issued rwts: total=2276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.893 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.893 filename1: (groupid=0, jobs=1): err= 0: pid=89836: Wed Apr 17 08:43:16 2024 00:46:43.893 read: IOPS=213, BW=855KiB/s (876kB/s)(8576KiB/10027msec) 00:46:43.893 slat (usec): min=5, max=8047, avg=29.57, stdev=346.22 00:46:43.893 clat (msec): min=23, max=598, avg=74.57, stdev=55.96 00:46:43.893 lat (msec): min=23, max=598, avg=74.60, stdev=55.96 00:46:43.893 clat percentiles (msec): 00:46:43.893 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 48], 00:46:43.893 | 30.00th=[ 52], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 71], 00:46:43.893 | 70.00th=[ 80], 80.00th=[ 89], 90.00th=[ 105], 95.00th=[ 128], 00:46:43.893 | 99.00th=[ 456], 99.50th=[ 468], 99.90th=[ 600], 99.95th=[ 600], 00:46:43.893 | 99.99th=[ 600] 00:46:43.893 bw ( KiB/s): min= 128, max= 1248, per=3.32%, avg=828.63, stdev=303.39, samples=19 00:46:43.893 iops : min= 32, max= 312, avg=207.16, stdev=75.85, samples=19 00:46:43.893 lat (msec) : 50=27.89%, 100=61.61%, 250=8.82%, 500=1.40%, 750=0.28% 00:46:43.893 cpu : usr=32.99%, sys=0.36%, ctx=915, majf=0, minf=9 00:46:43.893 IO depths : 1=1.4%, 2=3.4%, 4=11.4%, 8=71.6%, 16=12.2%, 32=0.0%, >=64=0.0% 
00:46:43.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.893 complete : 0=0.0%, 4=90.5%, 8=5.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.893 issued rwts: total=2144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.894 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.894 filename2: (groupid=0, jobs=1): err= 0: pid=89837: Wed Apr 17 08:43:16 2024 00:46:43.894 read: IOPS=227, BW=909KiB/s (931kB/s)(9100KiB/10007msec) 00:46:43.894 slat (usec): min=3, max=8037, avg=28.23, stdev=279.16 00:46:43.894 clat (msec): min=6, max=564, avg=70.19, stdev=56.09 00:46:43.894 lat (msec): min=6, max=564, avg=70.22, stdev=56.09 00:46:43.894 clat percentiles (msec): 00:46:43.894 | 1.00th=[ 11], 5.00th=[ 31], 10.00th=[ 37], 20.00th=[ 45], 00:46:43.894 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 66], 00:46:43.894 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 103], 95.00th=[ 116], 00:46:43.894 | 99.00th=[ 355], 99.50th=[ 567], 99.90th=[ 567], 99.95th=[ 567], 00:46:43.894 | 99.99th=[ 567] 00:46:43.894 bw ( KiB/s): min= 128, max= 1584, per=3.43%, avg=856.84, stdev=334.18, samples=19 00:46:43.894 iops : min= 32, max= 396, avg=214.21, stdev=83.55, samples=19 00:46:43.894 lat (msec) : 10=0.70%, 20=0.70%, 50=30.68%, 100=57.89%, 250=8.13% 00:46:43.894 lat (msec) : 500=1.19%, 750=0.70% 00:46:43.894 cpu : usr=41.87%, sys=0.41%, ctx=1052, majf=0, minf=9 00:46:43.894 IO depths : 1=1.6%, 2=3.9%, 4=13.2%, 8=69.3%, 16=11.9%, 32=0.0%, >=64=0.0% 00:46:43.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.894 complete : 0=0.0%, 4=90.9%, 8=4.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.894 issued rwts: total=2275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.894 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.894 filename2: (groupid=0, jobs=1): err= 0: pid=89838: Wed Apr 17 08:43:16 2024 00:46:43.894 read: IOPS=246, BW=986KiB/s (1009kB/s)(9888KiB/10033msec) 00:46:43.894 slat (usec): min=4, max=5028, avg=18.58, stdev=160.41 00:46:43.894 clat (msec): min=14, max=579, avg=64.79, stdev=48.19 00:46:43.894 lat (msec): min=14, max=579, avg=64.81, stdev=48.19 00:46:43.894 clat percentiles (msec): 00:46:43.894 | 1.00th=[ 17], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 41], 00:46:43.894 | 30.00th=[ 47], 40.00th=[ 53], 50.00th=[ 58], 60.00th=[ 64], 00:46:43.894 | 70.00th=[ 67], 80.00th=[ 78], 90.00th=[ 89], 95.00th=[ 102], 00:46:43.894 | 99.00th=[ 330], 99.50th=[ 426], 99.90th=[ 584], 99.95th=[ 584], 00:46:43.894 | 99.99th=[ 584] 00:46:43.894 bw ( KiB/s): min= 96, max= 1536, per=3.93%, avg=982.00, stdev=338.74, samples=20 00:46:43.894 iops : min= 24, max= 384, avg=245.45, stdev=84.69, samples=20 00:46:43.894 lat (msec) : 20=1.29%, 50=36.29%, 100=57.16%, 250=3.72%, 500=1.38% 00:46:43.894 lat (msec) : 750=0.16% 00:46:43.894 cpu : usr=47.58%, sys=0.36%, ctx=1160, majf=0, minf=9 00:46:43.894 IO depths : 1=1.9%, 2=4.2%, 4=13.2%, 8=69.4%, 16=11.2%, 32=0.0%, >=64=0.0% 00:46:43.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.894 complete : 0=0.0%, 4=90.9%, 8=4.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.894 issued rwts: total=2472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.894 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.894 filename2: (groupid=0, jobs=1): err= 0: pid=89839: Wed Apr 17 08:43:16 2024 00:46:43.894 read: IOPS=211, BW=848KiB/s (868kB/s)(8500KiB/10025msec) 00:46:43.894 slat (usec): min=6, max=8023, avg=21.51, stdev=205.39 00:46:43.894 clat (msec): min=26, 
max=477, avg=75.33, stdev=54.62 00:46:43.894 lat (msec): min=26, max=477, avg=75.35, stdev=54.63 00:46:43.894 clat percentiles (msec): 00:46:43.894 | 1.00th=[ 31], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 50], 00:46:43.894 | 30.00th=[ 57], 40.00th=[ 60], 50.00th=[ 63], 60.00th=[ 71], 00:46:43.894 | 70.00th=[ 82], 80.00th=[ 86], 90.00th=[ 100], 95.00th=[ 123], 00:46:43.894 | 99.00th=[ 451], 99.50th=[ 477], 99.90th=[ 477], 99.95th=[ 477], 00:46:43.894 | 99.99th=[ 477] 00:46:43.894 bw ( KiB/s): min= 128, max= 1192, per=3.28%, avg=819.37, stdev=301.33, samples=19 00:46:43.894 iops : min= 32, max= 298, avg=204.84, stdev=75.33, samples=19 00:46:43.894 lat (msec) : 50=22.49%, 100=67.72%, 250=7.72%, 500=2.07% 00:46:43.894 cpu : usr=37.10%, sys=0.33%, ctx=1084, majf=0, minf=9 00:46:43.894 IO depths : 1=1.0%, 2=2.6%, 4=9.5%, 8=73.2%, 16=13.7%, 32=0.0%, >=64=0.0% 00:46:43.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.894 complete : 0=0.0%, 4=90.4%, 8=6.0%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.894 issued rwts: total=2125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.894 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.894 filename2: (groupid=0, jobs=1): err= 0: pid=89840: Wed Apr 17 08:43:16 2024 00:46:43.894 read: IOPS=222, BW=890KiB/s (911kB/s)(8904KiB/10006msec) 00:46:43.894 slat (usec): min=3, max=8019, avg=19.61, stdev=174.41 00:46:43.894 clat (msec): min=6, max=599, avg=71.79, stdev=55.96 00:46:43.894 lat (msec): min=6, max=599, avg=71.81, stdev=55.96 00:46:43.894 clat percentiles (msec): 00:46:43.894 | 1.00th=[ 19], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 48], 00:46:43.894 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 69], 00:46:43.894 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 107], 00:46:43.894 | 99.00th=[ 451], 99.50th=[ 477], 99.90th=[ 600], 99.95th=[ 600], 00:46:43.894 | 99.99th=[ 600] 00:46:43.894 bw ( KiB/s): min= 128, max= 1168, per=3.42%, avg=853.89, stdev=305.75, samples=19 00:46:43.894 iops : min= 32, max= 292, avg=213.47, stdev=76.44, samples=19 00:46:43.894 lat (msec) : 10=0.99%, 20=0.27%, 50=22.87%, 100=69.23%, 250=5.03% 00:46:43.894 lat (msec) : 500=1.35%, 750=0.27% 00:46:43.894 cpu : usr=39.74%, sys=0.37%, ctx=1261, majf=0, minf=9 00:46:43.894 IO depths : 1=1.9%, 2=4.7%, 4=13.4%, 8=68.3%, 16=11.7%, 32=0.0%, >=64=0.0% 00:46:43.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.894 complete : 0=0.0%, 4=91.2%, 8=4.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.894 issued rwts: total=2226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.894 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.894 filename2: (groupid=0, jobs=1): err= 0: pid=89841: Wed Apr 17 08:43:16 2024 00:46:43.894 read: IOPS=214, BW=856KiB/s (877kB/s)(8576KiB/10014msec) 00:46:43.894 slat (usec): min=3, max=8057, avg=20.92, stdev=205.34 00:46:43.894 clat (msec): min=24, max=479, avg=74.54, stdev=54.48 00:46:43.894 lat (msec): min=24, max=479, avg=74.56, stdev=54.48 00:46:43.894 clat percentiles (msec): 00:46:43.894 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 50], 00:46:43.894 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 72], 00:46:43.894 | 70.00th=[ 77], 80.00th=[ 86], 90.00th=[ 97], 95.00th=[ 114], 00:46:43.894 | 99.00th=[ 443], 99.50th=[ 481], 99.90th=[ 481], 99.95th=[ 481], 00:46:43.894 | 99.99th=[ 481] 00:46:43.894 bw ( KiB/s): min= 128, max= 1392, per=3.43%, avg=856.45, stdev=297.39, samples=20 00:46:43.894 iops : min= 32, max= 348, avg=214.10, 
stdev=74.35, samples=20 00:46:43.894 lat (msec) : 50=22.29%, 100=68.33%, 250=7.42%, 500=1.96% 00:46:43.894 cpu : usr=34.50%, sys=0.37%, ctx=927, majf=0, minf=9 00:46:43.894 IO depths : 1=2.9%, 2=6.2%, 4=16.3%, 8=64.6%, 16=10.0%, 32=0.0%, >=64=0.0% 00:46:43.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.894 complete : 0=0.0%, 4=91.6%, 8=3.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.894 issued rwts: total=2144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.894 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.894 filename2: (groupid=0, jobs=1): err= 0: pid=89842: Wed Apr 17 08:43:16 2024 00:46:43.894 read: IOPS=212, BW=849KiB/s (870kB/s)(8496KiB/10004msec) 00:46:43.894 slat (usec): min=3, max=6042, avg=19.12, stdev=157.66 00:46:43.894 clat (msec): min=17, max=599, avg=75.20, stdev=61.00 00:46:43.894 lat (msec): min=17, max=599, avg=75.22, stdev=61.00 00:46:43.894 clat percentiles (msec): 00:46:43.894 | 1.00th=[ 24], 5.00th=[ 38], 10.00th=[ 41], 20.00th=[ 50], 00:46:43.894 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 71], 00:46:43.894 | 70.00th=[ 79], 80.00th=[ 89], 90.00th=[ 97], 95.00th=[ 122], 00:46:43.894 | 99.00th=[ 447], 99.50th=[ 600], 99.90th=[ 600], 99.95th=[ 600], 00:46:43.894 | 99.99th=[ 600] 00:46:43.894 bw ( KiB/s): min= 128, max= 1224, per=3.23%, avg=806.74, stdev=302.16, samples=19 00:46:43.894 iops : min= 32, max= 306, avg=201.68, stdev=75.54, samples=19 00:46:43.894 lat (msec) : 20=0.47%, 50=22.36%, 100=67.51%, 250=8.15%, 500=0.75% 00:46:43.894 lat (msec) : 750=0.75% 00:46:43.894 cpu : usr=46.52%, sys=0.53%, ctx=1338, majf=0, minf=9 00:46:43.894 IO depths : 1=2.8%, 2=6.4%, 4=17.2%, 8=63.6%, 16=10.1%, 32=0.0%, >=64=0.0% 00:46:43.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.894 complete : 0=0.0%, 4=91.9%, 8=2.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.894 issued rwts: total=2124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.894 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.894 filename2: (groupid=0, jobs=1): err= 0: pid=89843: Wed Apr 17 08:43:16 2024 00:46:43.894 read: IOPS=228, BW=913KiB/s (935kB/s)(9132KiB/10001msec) 00:46:43.894 slat (usec): min=3, max=8037, avg=22.83, stdev=238.77 00:46:43.894 clat (msec): min=2, max=566, avg=69.97, stdev=56.80 00:46:43.894 lat (msec): min=2, max=566, avg=69.99, stdev=56.80 00:46:43.894 clat percentiles (msec): 00:46:43.894 | 1.00th=[ 6], 5.00th=[ 25], 10.00th=[ 35], 20.00th=[ 45], 00:46:43.894 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 68], 00:46:43.894 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 116], 00:46:43.894 | 99.00th=[ 355], 99.50th=[ 567], 99.90th=[ 567], 99.95th=[ 567], 00:46:43.894 | 99.99th=[ 567] 00:46:43.894 bw ( KiB/s): min= 128, max= 1280, per=3.39%, avg=846.63, stdev=310.71, samples=19 00:46:43.894 iops : min= 32, max= 320, avg=211.63, stdev=77.68, samples=19 00:46:43.894 lat (msec) : 4=0.79%, 10=2.01%, 20=0.88%, 50=25.45%, 100=63.12% 00:46:43.894 lat (msec) : 250=5.65%, 500=1.40%, 750=0.70% 00:46:43.894 cpu : usr=39.26%, sys=0.39%, ctx=1252, majf=0, minf=9 00:46:43.894 IO depths : 1=1.4%, 2=3.6%, 4=12.4%, 8=70.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:46:43.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.894 complete : 0=0.0%, 4=90.9%, 8=4.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.894 issued rwts: total=2283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.894 latency : target=0, window=0, percentile=100.00%, depth=16 
00:46:43.894 filename2: (groupid=0, jobs=1): err= 0: pid=89844: Wed Apr 17 08:43:16 2024 00:46:43.894 read: IOPS=220, BW=881KiB/s (902kB/s)(8812KiB/10004msec) 00:46:43.894 slat (usec): min=3, max=8074, avg=22.81, stdev=256.93 00:46:43.894 clat (msec): min=8, max=426, avg=72.48, stdev=48.42 00:46:43.894 lat (msec): min=8, max=426, avg=72.51, stdev=48.41 00:46:43.894 clat percentiles (msec): 00:46:43.894 | 1.00th=[ 11], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 48], 00:46:43.894 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 71], 00:46:43.894 | 70.00th=[ 82], 80.00th=[ 90], 90.00th=[ 99], 95.00th=[ 115], 00:46:43.894 | 99.00th=[ 326], 99.50th=[ 388], 99.90th=[ 426], 99.95th=[ 426], 00:46:43.894 | 99.99th=[ 426] 00:46:43.894 bw ( KiB/s): min= 176, max= 1285, per=3.34%, avg=835.21, stdev=280.42, samples=19 00:46:43.895 iops : min= 44, max= 321, avg=208.79, stdev=70.08, samples=19 00:46:43.895 lat (msec) : 10=0.73%, 20=0.73%, 50=24.97%, 100=64.96%, 250=6.90% 00:46:43.895 lat (msec) : 500=1.72% 00:46:43.895 cpu : usr=37.35%, sys=0.23%, ctx=997, majf=0, minf=9 00:46:43.895 IO depths : 1=1.3%, 2=3.1%, 4=10.7%, 8=72.6%, 16=12.3%, 32=0.0%, >=64=0.0% 00:46:43.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.895 complete : 0=0.0%, 4=90.3%, 8=5.2%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:43.895 issued rwts: total=2203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:43.895 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:43.895 00:46:43.895 Run status group 0 (all jobs): 00:46:43.895 READ: bw=24.4MiB/s (25.6MB/s), 848KiB/s-2259KiB/s (868kB/s-2313kB/s), io=246MiB (257MB), run=10001-10070msec 00:46:43.895 08:43:17 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:46:43.895 08:43:17 -- target/dif.sh@43 -- # local sub 00:46:43.895 08:43:17 -- target/dif.sh@45 -- # for sub in "$@" 00:46:43.895 08:43:17 -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:43.895 08:43:17 -- target/dif.sh@36 -- # local sub_id=0 00:46:43.895 08:43:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:43.895 08:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:43.895 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:46:43.895 08:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:43.895 08:43:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:43.895 08:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:43.895 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:46:43.895 08:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:43.895 08:43:17 -- target/dif.sh@45 -- # for sub in "$@" 00:46:43.895 08:43:17 -- target/dif.sh@46 -- # destroy_subsystem 1 00:46:43.895 08:43:17 -- target/dif.sh@36 -- # local sub_id=1 00:46:43.895 08:43:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:43.895 08:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:43.895 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:46:43.895 08:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:43.895 08:43:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:46:43.895 08:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:43.895 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:46:43.895 08:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:43.895 08:43:17 -- target/dif.sh@45 -- # for sub in "$@" 00:46:43.895 08:43:17 -- target/dif.sh@46 -- # destroy_subsystem 2 00:46:43.895 
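The destroy_subsystems trace running through this point tears down subsystems 0 to 2 in order, deleting each NVMe-oF subsystem before dropping its backing null bdev. A minimal sketch of the same teardown issued directly with rpc.py, assuming the default /var/tmp/spdk.sock socket and the repo path seen elsewhere in this log:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in 0 1 2; do
    # Delete the subsystem first so no initiator still references the
    # namespace, then remove the null bdev that backed it.
    "$RPC" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    "$RPC" bdev_null_delete "bdev_null$i"
done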
08:43:17 -- target/dif.sh@36 -- # local sub_id=2 00:46:43.895 08:43:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:46:43.895 08:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:43.895 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:46:43.895 08:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:43.895 08:43:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:46:43.895 08:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:43.895 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:46:43.895 08:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:43.895 08:43:17 -- target/dif.sh@115 -- # NULL_DIF=1 00:46:43.895 08:43:17 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:46:43.895 08:43:17 -- target/dif.sh@115 -- # numjobs=2 00:46:43.895 08:43:17 -- target/dif.sh@115 -- # iodepth=8 00:46:43.895 08:43:17 -- target/dif.sh@115 -- # runtime=5 00:46:43.895 08:43:17 -- target/dif.sh@115 -- # files=1 00:46:43.895 08:43:17 -- target/dif.sh@117 -- # create_subsystems 0 1 00:46:43.895 08:43:17 -- target/dif.sh@28 -- # local sub 00:46:43.895 08:43:17 -- target/dif.sh@30 -- # for sub in "$@" 00:46:43.895 08:43:17 -- target/dif.sh@31 -- # create_subsystem 0 00:46:43.895 08:43:17 -- target/dif.sh@18 -- # local sub_id=0 00:46:43.895 08:43:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:46:43.895 08:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:43.895 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:46:43.895 bdev_null0 00:46:43.895 08:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:43.895 08:43:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:43.895 08:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:43.895 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:46:43.895 08:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:43.895 08:43:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:43.895 08:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:43.895 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:46:43.895 08:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:43.895 08:43:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:43.895 08:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:43.895 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:46:43.895 [2024-04-17 08:43:17.125144] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:43.895 08:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:43.895 08:43:17 -- target/dif.sh@30 -- # for sub in "$@" 00:46:43.895 08:43:17 -- target/dif.sh@31 -- # create_subsystem 1 00:46:43.895 08:43:17 -- target/dif.sh@18 -- # local sub_id=1 00:46:43.895 08:43:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:46:43.895 08:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:43.895 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:46:43.895 bdev_null1 00:46:43.895 08:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:43.895 08:43:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 
--allow-any-host 00:46:43.895 08:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:43.895 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:46:43.895 08:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:43.895 08:43:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:46:43.895 08:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:43.895 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:46:43.895 08:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:43.895 08:43:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:43.895 08:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:43.895 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:46:43.895 08:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:43.895 08:43:17 -- target/dif.sh@118 -- # fio /dev/fd/62 00:46:43.895 08:43:17 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:46:43.895 08:43:17 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:46:43.895 08:43:17 -- nvmf/common.sh@520 -- # config=() 00:46:43.895 08:43:17 -- nvmf/common.sh@520 -- # local subsystem config 00:46:43.895 08:43:17 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:43.895 08:43:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:46:43.895 08:43:17 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:43.895 08:43:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:46:43.895 { 00:46:43.895 "params": { 00:46:43.895 "name": "Nvme$subsystem", 00:46:43.895 "trtype": "$TEST_TRANSPORT", 00:46:43.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:43.895 "adrfam": "ipv4", 00:46:43.895 "trsvcid": "$NVMF_PORT", 00:46:43.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:43.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:43.895 "hdgst": ${hdgst:-false}, 00:46:43.895 "ddgst": ${ddgst:-false} 00:46:43.895 }, 00:46:43.895 "method": "bdev_nvme_attach_controller" 00:46:43.895 } 00:46:43.895 EOF 00:46:43.895 )") 00:46:43.895 08:43:17 -- target/dif.sh@82 -- # gen_fio_conf 00:46:43.895 08:43:17 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:46:43.895 08:43:17 -- target/dif.sh@54 -- # local file 00:46:43.895 08:43:17 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:43.895 08:43:17 -- common/autotest_common.sh@1318 -- # local sanitizers 00:46:43.895 08:43:17 -- target/dif.sh@56 -- # cat 00:46:43.895 08:43:17 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:43.895 08:43:17 -- common/autotest_common.sh@1320 -- # shift 00:46:43.895 08:43:17 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:46:43.895 08:43:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:46:43.895 08:43:17 -- nvmf/common.sh@542 -- # cat 00:46:43.895 08:43:17 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:43.895 08:43:17 -- common/autotest_common.sh@1324 -- # grep libasan 00:46:43.895 08:43:17 -- target/dif.sh@72 -- # (( file = 1 )) 00:46:43.895 08:43:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:46:43.895 08:43:17 -- target/dif.sh@72 -- # (( file <= files )) 00:46:43.895 08:43:17 -- target/dif.sh@73 -- # cat 
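The gen_fio_conf steps traced here iterate (( file <= files )) and cat one job stanza per namespace; fio later reads the assembled conf on /dev/fd/61 while the target JSON arrives on /dev/fd/62. A sketch of the assumed stanza shape (the [filename0]/[filename1] job names match the per-file groups reported in the fio output further down; the exact options the helper emits are not visible in this log):

# Assumed shape only; filenames follow the NvmeXn1 naming of the
# controllers attached in the JSON config.
cat <<-FIO
	[filename0]
	filename=Nvme0n1
	[filename1]
	filename=Nvme1n1
FIO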
00:46:43.895 08:43:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:46:43.895 08:43:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:46:43.895 { 00:46:43.895 "params": { 00:46:43.895 "name": "Nvme$subsystem", 00:46:43.895 "trtype": "$TEST_TRANSPORT", 00:46:43.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:43.895 "adrfam": "ipv4", 00:46:43.895 "trsvcid": "$NVMF_PORT", 00:46:43.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:43.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:43.895 "hdgst": ${hdgst:-false}, 00:46:43.895 "ddgst": ${ddgst:-false} 00:46:43.895 }, 00:46:43.895 "method": "bdev_nvme_attach_controller" 00:46:43.895 } 00:46:43.895 EOF 00:46:43.895 )") 00:46:43.895 08:43:17 -- target/dif.sh@72 -- # (( file++ )) 00:46:43.895 08:43:17 -- nvmf/common.sh@542 -- # cat 00:46:43.895 08:43:17 -- target/dif.sh@72 -- # (( file <= files )) 00:46:43.895 08:43:17 -- nvmf/common.sh@544 -- # jq . 00:46:43.895 08:43:17 -- nvmf/common.sh@545 -- # IFS=, 00:46:43.895 08:43:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:46:43.895 "params": { 00:46:43.895 "name": "Nvme0", 00:46:43.895 "trtype": "tcp", 00:46:43.895 "traddr": "10.0.0.2", 00:46:43.895 "adrfam": "ipv4", 00:46:43.895 "trsvcid": "4420", 00:46:43.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:43.895 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:43.895 "hdgst": false, 00:46:43.895 "ddgst": false 00:46:43.895 }, 00:46:43.895 "method": "bdev_nvme_attach_controller" 00:46:43.895 },{ 00:46:43.895 "params": { 00:46:43.895 "name": "Nvme1", 00:46:43.895 "trtype": "tcp", 00:46:43.896 "traddr": "10.0.0.2", 00:46:43.896 "adrfam": "ipv4", 00:46:43.896 "trsvcid": "4420", 00:46:43.896 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:43.896 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:43.896 "hdgst": false, 00:46:43.896 "ddgst": false 00:46:43.896 }, 00:46:43.896 "method": "bdev_nvme_attach_controller" 00:46:43.896 }' 00:46:44.155 08:43:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:46:44.155 08:43:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:46:44.155 08:43:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:46:44.155 08:43:17 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:44.155 08:43:17 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:46:44.155 08:43:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:46:44.155 08:43:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:46:44.155 08:43:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:46:44.155 08:43:17 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:46:44.155 08:43:17 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:44.155 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:46:44.155 ... 00:46:44.155 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:46:44.155 ... 00:46:44.155 fio-3.35 00:46:44.155 Starting 4 threads 00:46:44.724 [2024-04-17 08:43:17.922778] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
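The printf above emits the complete attach configuration for Nvme0 and Nvme1 over TCP to 10.0.0.2:4420, and the LD_PRELOAD invocation that follows launches fio with the spdk_bdev ioengine against it. A sketch of replaying the run by hand, with the JSON saved to a file instead of a process substitution (file paths are illustrative, and the params blobs above would still need gen_nvmf_target_json's enclosing SPDK config envelope, whose exact shape this log does not show):

PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# /tmp/bdev.json: the bdev_nvme_attach_controller params printed above,
# wrapped in an SPDK JSON config envelope (assumed).
LD_PRELOAD=$PLUGIN /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf=/tmp/bdev.json \
    /tmp/dif.fio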
00:46:44.724 [2024-04-17 08:43:17.922836] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:46:50.035 00:46:50.035 filename0: (groupid=0, jobs=1): err= 0: pid=89993: Wed Apr 17 08:43:23 2024 00:46:50.035 read: IOPS=2271, BW=17.7MiB/s (18.6MB/s)(88.8MiB/5002msec) 00:46:50.035 slat (nsec): min=5981, max=83017, avg=19989.65, stdev=9705.82 00:46:50.035 clat (usec): min=1349, max=12766, avg=3422.03, stdev=425.04 00:46:50.035 lat (usec): min=1359, max=12810, avg=3442.02, stdev=425.49 00:46:50.035 clat percentiles (usec): 00:46:50.035 | 1.00th=[ 2868], 5.00th=[ 2999], 10.00th=[ 3064], 20.00th=[ 3195], 00:46:50.035 | 30.00th=[ 3294], 40.00th=[ 3359], 50.00th=[ 3425], 60.00th=[ 3458], 00:46:50.035 | 70.00th=[ 3523], 80.00th=[ 3556], 90.00th=[ 3654], 95.00th=[ 3720], 00:46:50.035 | 99.00th=[ 5276], 99.50th=[ 5604], 99.90th=[ 7701], 99.95th=[12649], 00:46:50.035 | 99.99th=[12780] 00:46:50.035 bw ( KiB/s): min=17280, max=19456, per=24.96%, avg=18161.78, stdev=588.12, samples=9 00:46:50.035 iops : min= 2160, max= 2432, avg=2270.22, stdev=73.51, samples=9 00:46:50.035 lat (msec) : 2=0.11%, 4=97.86%, 10=1.95%, 20=0.08% 00:46:50.035 cpu : usr=97.00%, sys=1.94%, ctx=211, majf=0, minf=9 00:46:50.035 IO depths : 1=10.2%, 2=25.0%, 4=50.0%, 8=14.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:50.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:50.035 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:50.035 issued rwts: total=11360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:50.035 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:50.035 filename0: (groupid=0, jobs=1): err= 0: pid=89994: Wed Apr 17 08:43:23 2024 00:46:50.035 read: IOPS=2279, BW=17.8MiB/s (18.7MB/s)(89.1MiB/5001msec) 00:46:50.035 slat (nsec): min=3229, max=77036, avg=17446.74, stdev=9506.74 00:46:50.035 clat (usec): min=875, max=12924, avg=3422.45, stdev=449.96 00:46:50.035 lat (usec): min=883, max=12947, avg=3439.90, stdev=450.74 00:46:50.035 clat percentiles (usec): 00:46:50.035 | 1.00th=[ 2769], 5.00th=[ 3032], 10.00th=[ 3097], 20.00th=[ 3228], 00:46:50.035 | 30.00th=[ 3294], 40.00th=[ 3359], 50.00th=[ 3425], 60.00th=[ 3458], 00:46:50.035 | 70.00th=[ 3523], 80.00th=[ 3589], 90.00th=[ 3654], 95.00th=[ 3752], 00:46:50.035 | 99.00th=[ 5342], 99.50th=[ 5538], 99.90th=[ 7701], 99.95th=[12911], 00:46:50.035 | 99.99th=[12911] 00:46:50.035 bw ( KiB/s): min=17792, max=19328, per=25.08%, avg=18248.89, stdev=483.20, samples=9 00:46:50.035 iops : min= 2224, max= 2416, avg=2280.89, stdev=60.22, samples=9 00:46:50.035 lat (usec) : 1000=0.24% 00:46:50.035 lat (msec) : 2=0.32%, 4=97.32%, 10=2.06%, 20=0.07% 00:46:50.035 cpu : usr=96.80%, sys=2.06%, ctx=6, majf=0, minf=9 00:46:50.035 IO depths : 1=9.8%, 2=23.8%, 4=51.1%, 8=15.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:50.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:50.035 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:50.035 issued rwts: total=11401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:50.035 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:50.035 filename1: (groupid=0, jobs=1): err= 0: pid=89995: Wed Apr 17 08:43:23 2024 00:46:50.035 read: IOPS=2271, BW=17.7MiB/s (18.6MB/s)(88.8MiB/5001msec) 00:46:50.035 slat (nsec): min=6193, max=83170, avg=19258.45, stdev=9389.77 00:46:50.035 clat (usec): min=738, max=19431, avg=3425.53, stdev=456.68 00:46:50.035 lat (usec): min=750, max=19445, avg=3444.79, stdev=456.98 00:46:50.035 
clat percentiles (usec): 00:46:50.035 | 1.00th=[ 2868], 5.00th=[ 2999], 10.00th=[ 3064], 20.00th=[ 3195], 00:46:50.035 | 30.00th=[ 3294], 40.00th=[ 3359], 50.00th=[ 3425], 60.00th=[ 3458], 00:46:50.035 | 70.00th=[ 3523], 80.00th=[ 3589], 90.00th=[ 3654], 95.00th=[ 3720], 00:46:50.035 | 99.00th=[ 5276], 99.50th=[ 5604], 99.90th=[ 8029], 99.95th=[12780], 00:46:50.035 | 99.99th=[12780] 00:46:50.035 bw ( KiB/s): min=17314, max=19456, per=24.96%, avg=18165.56, stdev=581.82, samples=9 00:46:50.035 iops : min= 2164, max= 2432, avg=2270.67, stdev=72.77, samples=9 00:46:50.035 lat (usec) : 750=0.01%, 1000=0.01% 00:46:50.035 lat (msec) : 2=0.18%, 4=97.69%, 10=2.04%, 20=0.07% 00:46:50.035 cpu : usr=97.12%, sys=1.86%, ctx=23, majf=0, minf=9 00:46:50.035 IO depths : 1=10.1%, 2=25.0%, 4=50.0%, 8=14.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:50.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:50.035 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:50.035 issued rwts: total=11360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:50.035 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:50.035 filename1: (groupid=0, jobs=1): err= 0: pid=89996: Wed Apr 17 08:43:23 2024 00:46:50.035 read: IOPS=2274, BW=17.8MiB/s (18.6MB/s)(88.9MiB/5002msec) 00:46:50.035 slat (nsec): min=5703, max=64236, avg=11212.71, stdev=6297.36 00:46:50.035 clat (usec): min=691, max=12558, avg=3469.07, stdev=435.82 00:46:50.035 lat (usec): min=711, max=12573, avg=3480.28, stdev=435.97 00:46:50.035 clat percentiles (usec): 00:46:50.035 | 1.00th=[ 2769], 5.00th=[ 3032], 10.00th=[ 3097], 20.00th=[ 3261], 00:46:50.035 | 30.00th=[ 3359], 40.00th=[ 3425], 50.00th=[ 3458], 60.00th=[ 3523], 00:46:50.035 | 70.00th=[ 3556], 80.00th=[ 3621], 90.00th=[ 3720], 95.00th=[ 3785], 00:46:50.035 | 99.00th=[ 5342], 99.50th=[ 5604], 99.90th=[ 7701], 99.95th=[12518], 00:46:50.035 | 99.99th=[12518] 00:46:50.035 bw ( KiB/s): min=17584, max=19328, per=24.99%, avg=18186.67, stdev=515.18, samples=9 00:46:50.035 iops : min= 2198, max= 2416, avg=2273.33, stdev=64.40, samples=9 00:46:50.035 lat (usec) : 750=0.04%, 1000=0.05% 00:46:50.035 lat (msec) : 2=0.20%, 4=97.38%, 10=2.27%, 20=0.06% 00:46:50.035 cpu : usr=95.52%, sys=3.28%, ctx=5, majf=0, minf=9 00:46:50.035 IO depths : 1=4.6%, 2=19.1%, 4=55.8%, 8=20.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:50.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:50.035 complete : 0=0.0%, 4=89.7%, 8=10.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:50.035 issued rwts: total=11377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:50.035 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:50.035 00:46:50.035 Run status group 0 (all jobs): 00:46:50.035 READ: bw=71.1MiB/s (74.5MB/s), 17.7MiB/s-17.8MiB/s (18.6MB/s-18.7MB/s), io=355MiB (373MB), run=5001-5002msec 00:46:50.035 08:43:23 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:46:50.035 08:43:23 -- target/dif.sh@43 -- # local sub 00:46:50.035 08:43:23 -- target/dif.sh@45 -- # for sub in "$@" 00:46:50.035 08:43:23 -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:50.035 08:43:23 -- target/dif.sh@36 -- # local sub_id=0 00:46:50.035 08:43:23 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:50.035 08:43:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:50.035 08:43:23 -- common/autotest_common.sh@10 -- # set +x 00:46:50.035 08:43:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:50.035 08:43:23 -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:46:50.035 08:43:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:50.035 08:43:23 -- common/autotest_common.sh@10 -- # set +x 00:46:50.035 08:43:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:50.035 08:43:23 -- target/dif.sh@45 -- # for sub in "$@" 00:46:50.035 08:43:23 -- target/dif.sh@46 -- # destroy_subsystem 1 00:46:50.035 08:43:23 -- target/dif.sh@36 -- # local sub_id=1 00:46:50.035 08:43:23 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:50.035 08:43:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:50.035 08:43:23 -- common/autotest_common.sh@10 -- # set +x 00:46:50.035 08:43:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:50.035 08:43:23 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:46:50.035 08:43:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:50.035 08:43:23 -- common/autotest_common.sh@10 -- # set +x 00:46:50.035 08:43:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:50.035 00:46:50.035 real 0m25.608s 00:46:50.035 user 2m25.197s 00:46:50.035 sys 0m2.766s 00:46:50.035 08:43:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:50.035 08:43:23 -- common/autotest_common.sh@10 -- # set +x 00:46:50.035 ************************************ 00:46:50.035 END TEST fio_dif_rand_params 00:46:50.035 ************************************ 00:46:50.035 08:43:23 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:46:50.035 08:43:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:46:50.035 08:43:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:46:50.035 08:43:23 -- common/autotest_common.sh@10 -- # set +x 00:46:50.035 ************************************ 00:46:50.035 START TEST fio_dif_digest 00:46:50.035 ************************************ 00:46:50.035 08:43:23 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:46:50.035 08:43:23 -- target/dif.sh@123 -- # local NULL_DIF 00:46:50.035 08:43:23 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:46:50.035 08:43:23 -- target/dif.sh@125 -- # local hdgst ddgst 00:46:50.035 08:43:23 -- target/dif.sh@127 -- # NULL_DIF=3 00:46:50.035 08:43:23 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:46:50.035 08:43:23 -- target/dif.sh@127 -- # numjobs=3 00:46:50.035 08:43:23 -- target/dif.sh@127 -- # iodepth=3 00:46:50.035 08:43:23 -- target/dif.sh@127 -- # runtime=10 00:46:50.035 08:43:23 -- target/dif.sh@128 -- # hdgst=true 00:46:50.035 08:43:23 -- target/dif.sh@128 -- # ddgst=true 00:46:50.035 08:43:23 -- target/dif.sh@130 -- # create_subsystems 0 00:46:50.035 08:43:23 -- target/dif.sh@28 -- # local sub 00:46:50.035 08:43:23 -- target/dif.sh@30 -- # for sub in "$@" 00:46:50.035 08:43:23 -- target/dif.sh@31 -- # create_subsystem 0 00:46:50.035 08:43:23 -- target/dif.sh@18 -- # local sub_id=0 00:46:50.036 08:43:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:46:50.036 08:43:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:50.036 08:43:23 -- common/autotest_common.sh@10 -- # set +x 00:46:50.295 bdev_null0 00:46:50.295 08:43:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:50.295 08:43:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:50.295 08:43:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:50.295 08:43:23 -- common/autotest_common.sh@10 -- # set +x 00:46:50.295 
08:43:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:50.295 08:43:23 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:50.295 08:43:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:50.295 08:43:23 -- common/autotest_common.sh@10 -- # set +x 00:46:50.295 08:43:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:50.295 08:43:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:50.295 08:43:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:50.295 08:43:23 -- common/autotest_common.sh@10 -- # set +x 00:46:50.295 [2024-04-17 08:43:23.399362] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:50.295 08:43:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:50.295 08:43:23 -- target/dif.sh@131 -- # fio /dev/fd/62 00:46:50.295 08:43:23 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:46:50.295 08:43:23 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:46:50.295 08:43:23 -- nvmf/common.sh@520 -- # config=() 00:46:50.295 08:43:23 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:50.295 08:43:23 -- nvmf/common.sh@520 -- # local subsystem config 00:46:50.295 08:43:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:46:50.295 08:43:23 -- target/dif.sh@82 -- # gen_fio_conf 00:46:50.295 08:43:23 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:50.295 08:43:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:46:50.295 { 00:46:50.295 "params": { 00:46:50.295 "name": "Nvme$subsystem", 00:46:50.295 "trtype": "$TEST_TRANSPORT", 00:46:50.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:50.295 "adrfam": "ipv4", 00:46:50.295 "trsvcid": "$NVMF_PORT", 00:46:50.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:50.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:50.295 "hdgst": ${hdgst:-false}, 00:46:50.295 "ddgst": ${ddgst:-false} 00:46:50.295 }, 00:46:50.295 "method": "bdev_nvme_attach_controller" 00:46:50.295 } 00:46:50.295 EOF 00:46:50.295 )") 00:46:50.295 08:43:23 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:46:50.295 08:43:23 -- target/dif.sh@54 -- # local file 00:46:50.295 08:43:23 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:50.295 08:43:23 -- target/dif.sh@56 -- # cat 00:46:50.295 08:43:23 -- common/autotest_common.sh@1318 -- # local sanitizers 00:46:50.295 08:43:23 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:50.295 08:43:23 -- common/autotest_common.sh@1320 -- # shift 00:46:50.295 08:43:23 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:46:50.295 08:43:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:46:50.295 08:43:23 -- nvmf/common.sh@542 -- # cat 00:46:50.295 08:43:23 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:50.295 08:43:23 -- target/dif.sh@72 -- # (( file = 1 )) 00:46:50.295 08:43:23 -- target/dif.sh@72 -- # (( file <= files )) 00:46:50.295 08:43:23 -- common/autotest_common.sh@1324 -- # grep libasan 00:46:50.295 08:43:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:46:50.295 08:43:23 -- nvmf/common.sh@544 -- # jq . 
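The jq step just traced and the IFS=, join that follows suggest the collected per-controller params blobs are comma-joined into one array and validated with jq before being printed. A rough sketch of that join, with the surrounding envelope an assumption rather than something this log confirms:

# Hypothetical reconstruction of the join; config[] holds the heredoc
# fragments accumulated above.
config=('{ "params": { "name": "Nvme0" } }' '{ "params": { "name": "Nvme1" } }')
joined=$(IFS=,; printf '%s\n' "${config[*]}")
printf '%s\n' "$joined" | jq .   # validate and pretty-print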
00:46:50.295 08:43:23 -- nvmf/common.sh@545 -- # IFS=, 00:46:50.295 08:43:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:46:50.295 "params": { 00:46:50.295 "name": "Nvme0", 00:46:50.295 "trtype": "tcp", 00:46:50.295 "traddr": "10.0.0.2", 00:46:50.295 "adrfam": "ipv4", 00:46:50.295 "trsvcid": "4420", 00:46:50.295 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:50.295 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:50.295 "hdgst": true, 00:46:50.295 "ddgst": true 00:46:50.295 }, 00:46:50.295 "method": "bdev_nvme_attach_controller" 00:46:50.295 }' 00:46:50.295 08:43:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:46:50.295 08:43:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:46:50.295 08:43:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:46:50.295 08:43:23 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:50.295 08:43:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:46:50.295 08:43:23 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:46:50.295 08:43:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:46:50.295 08:43:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:46:50.295 08:43:23 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:46:50.295 08:43:23 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:50.295 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:46:50.295 ... 00:46:50.295 fio-3.35 00:46:50.295 Starting 3 threads 00:46:50.863 [2024-04-17 08:43:23.973642] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
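Relative to the earlier random-params run, the attach JSON printed here differs mainly in "hdgst": true and "ddgst": true, which turn on NVMe/TCP header and data digests for the connection under test. For comparison, a sketch of a digest-enabled connection made from the kernel initiator with nvme-cli (flag spellings should be checked against the installed nvme-cli version):

# Hypothetical manual equivalent of the plugin's digest-enabled attach
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 \
    --hdr-digest --data-digest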
00:46:50.863 [2024-04-17 08:43:23.973698] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:47:00.845 00:47:00.845 filename0: (groupid=0, jobs=1): err= 0: pid=90102: Wed Apr 17 08:43:34 2024 00:47:00.845 read: IOPS=258, BW=32.3MiB/s (33.9MB/s)(324MiB/10044msec) 00:47:00.845 slat (nsec): min=6898, max=59851, avg=14376.66, stdev=4651.29 00:47:00.845 clat (usec): min=7844, max=52866, avg=11583.12, stdev=3887.54 00:47:00.845 lat (usec): min=7871, max=52880, avg=11597.49, stdev=3887.36 00:47:00.845 clat percentiles (usec): 00:47:00.845 | 1.00th=[ 9241], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:47:00.845 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:47:00.845 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649], 00:47:00.845 | 99.00th=[15926], 99.50th=[52167], 99.90th=[52167], 99.95th=[52691], 00:47:00.845 | 99.99th=[52691] 00:47:00.845 bw ( KiB/s): min=29440, max=37120, per=37.79%, avg=33170.90, stdev=1781.23, samples=20 00:47:00.845 iops : min= 230, max= 290, avg=259.10, stdev=13.91, samples=20 00:47:00.845 lat (msec) : 10=6.67%, 20=92.44%, 50=0.04%, 100=0.85% 00:47:00.845 cpu : usr=94.95%, sys=3.85%, ctx=7, majf=0, minf=0 00:47:00.845 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:00.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.845 issued rwts: total=2594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.845 latency : target=0, window=0, percentile=100.00%, depth=3 00:47:00.845 filename0: (groupid=0, jobs=1): err= 0: pid=90103: Wed Apr 17 08:43:34 2024 00:47:00.845 read: IOPS=237, BW=29.7MiB/s (31.1MB/s)(297MiB/10005msec) 00:47:00.845 slat (nsec): min=6499, max=56112, avg=13529.45, stdev=4677.20 00:47:00.845 clat (usec): min=6497, max=16664, avg=12606.75, stdev=1452.94 00:47:00.845 lat (usec): min=6513, max=16681, avg=12620.28, stdev=1453.01 00:47:00.845 clat percentiles (usec): 00:47:00.845 | 1.00th=[ 7439], 5.00th=[10159], 10.00th=[11207], 20.00th=[11863], 00:47:00.845 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:47:00.845 | 70.00th=[13304], 80.00th=[13698], 90.00th=[14091], 95.00th=[14484], 00:47:00.845 | 99.00th=[15270], 99.50th=[15533], 99.90th=[16057], 99.95th=[16057], 00:47:00.845 | 99.99th=[16712] 00:47:00.845 bw ( KiB/s): min=27648, max=33280, per=34.81%, avg=30554.74, stdev=1546.31, samples=19 00:47:00.845 iops : min= 216, max= 260, avg=238.63, stdev=12.05, samples=19 00:47:00.845 lat (msec) : 10=4.75%, 20=95.25% 00:47:00.845 cpu : usr=95.17%, sys=3.73%, ctx=23, majf=0, minf=0 00:47:00.845 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:00.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.845 issued rwts: total=2377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.845 latency : target=0, window=0, percentile=100.00%, depth=3 00:47:00.845 filename0: (groupid=0, jobs=1): err= 0: pid=90104: Wed Apr 17 08:43:34 2024 00:47:00.845 read: IOPS=191, BW=23.9MiB/s (25.1MB/s)(240MiB/10002msec) 00:47:00.845 slat (nsec): min=7673, max=42727, avg=14905.82, stdev=4128.61 00:47:00.845 clat (usec): min=3740, max=26144, avg=15642.75, stdev=1746.04 00:47:00.845 lat (usec): min=3760, max=26166, avg=15657.66, stdev=1746.07 00:47:00.845 clat percentiles (usec): 00:47:00.845 | 1.00th=[ 
9634], 5.00th=[12125], 10.00th=[13829], 20.00th=[15008], 00:47:00.845 | 30.00th=[15401], 40.00th=[15664], 50.00th=[15926], 60.00th=[16188], 00:47:00.845 | 70.00th=[16450], 80.00th=[16712], 90.00th=[17171], 95.00th=[17695], 00:47:00.845 | 99.00th=[18744], 99.50th=[19268], 99.90th=[24773], 99.95th=[26084], 00:47:00.845 | 99.99th=[26084] 00:47:00.845 bw ( KiB/s): min=21760, max=27648, per=27.97%, avg=24549.05, stdev=1535.75, samples=19 00:47:00.845 iops : min= 170, max= 216, avg=191.79, stdev=12.00, samples=19 00:47:00.845 lat (msec) : 4=0.05%, 10=2.09%, 20=97.39%, 50=0.47% 00:47:00.845 cpu : usr=95.62%, sys=3.27%, ctx=8, majf=0, minf=9 00:47:00.845 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:00.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.845 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.845 latency : target=0, window=0, percentile=100.00%, depth=3 00:47:00.845 00:47:00.845 Run status group 0 (all jobs): 00:47:00.845 READ: bw=85.7MiB/s (89.9MB/s), 23.9MiB/s-32.3MiB/s (25.1MB/s-33.9MB/s), io=861MiB (903MB), run=10002-10044msec 00:47:01.104 08:43:34 -- target/dif.sh@132 -- # destroy_subsystems 0 00:47:01.104 08:43:34 -- target/dif.sh@43 -- # local sub 00:47:01.104 08:43:34 -- target/dif.sh@45 -- # for sub in "$@" 00:47:01.104 08:43:34 -- target/dif.sh@46 -- # destroy_subsystem 0 00:47:01.104 08:43:34 -- target/dif.sh@36 -- # local sub_id=0 00:47:01.104 08:43:34 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:47:01.104 08:43:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:01.104 08:43:34 -- common/autotest_common.sh@10 -- # set +x 00:47:01.104 08:43:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:01.104 08:43:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:47:01.104 08:43:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:01.104 08:43:34 -- common/autotest_common.sh@10 -- # set +x 00:47:01.104 08:43:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:01.104 00:47:01.104 real 0m11.000s 00:47:01.104 user 0m29.279s 00:47:01.104 sys 0m1.360s 00:47:01.104 08:43:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:01.104 08:43:34 -- common/autotest_common.sh@10 -- # set +x 00:47:01.104 ************************************ 00:47:01.104 END TEST fio_dif_digest 00:47:01.104 ************************************ 00:47:01.104 08:43:34 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:47:01.104 08:43:34 -- target/dif.sh@147 -- # nvmftestfini 00:47:01.104 08:43:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:47:01.104 08:43:34 -- nvmf/common.sh@116 -- # sync 00:47:01.104 08:43:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:47:01.104 08:43:34 -- nvmf/common.sh@119 -- # set +e 00:47:01.104 08:43:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:47:01.104 08:43:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:47:01.104 rmmod nvme_tcp 00:47:01.104 rmmod nvme_fabrics 00:47:01.104 rmmod nvme_keyring 00:47:01.382 08:43:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:47:01.382 08:43:34 -- nvmf/common.sh@123 -- # set -e 00:47:01.382 08:43:34 -- nvmf/common.sh@124 -- # return 0 00:47:01.382 08:43:34 -- nvmf/common.sh@477 -- # '[' -n 89316 ']' 00:47:01.382 08:43:34 -- nvmf/common.sh@478 -- # killprocess 89316 00:47:01.382 08:43:34 -- common/autotest_common.sh@926 -- # '[' -z 89316 ']' 00:47:01.382 
08:43:34 -- common/autotest_common.sh@930 -- # kill -0 89316 00:47:01.382 08:43:34 -- common/autotest_common.sh@931 -- # uname 00:47:01.382 08:43:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:47:01.382 08:43:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89316 00:47:01.382 08:43:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:47:01.382 08:43:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:47:01.382 killing process with pid 89316 00:47:01.382 08:43:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89316' 00:47:01.382 08:43:34 -- common/autotest_common.sh@945 -- # kill 89316 00:47:01.382 08:43:34 -- common/autotest_common.sh@950 -- # wait 89316 00:47:01.382 08:43:34 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:47:01.382 08:43:34 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:47:01.638 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:47:01.638 Waiting for block devices as requested 00:47:01.638 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:47:01.895 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:47:01.895 08:43:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:47:01.895 08:43:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:47:01.895 08:43:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:47:01.895 08:43:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:47:01.895 08:43:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:01.895 08:43:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:01.895 08:43:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:01.895 08:43:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:47:01.895 00:47:01.895 real 1m2.369s 00:47:01.895 user 4m14.439s 00:47:01.895 sys 0m10.886s 00:47:01.895 08:43:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:01.895 08:43:35 -- common/autotest_common.sh@10 -- # set +x 00:47:01.895 ************************************ 00:47:01.895 END TEST nvmf_dif 00:47:01.895 ************************************ 00:47:01.895 08:43:35 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:47:01.895 08:43:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:47:01.895 08:43:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:47:01.895 08:43:35 -- common/autotest_common.sh@10 -- # set +x 00:47:01.895 ************************************ 00:47:01.895 START TEST nvmf_abort_qd_sizes 00:47:01.895 ************************************ 00:47:01.895 08:43:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:47:01.895 * Looking for test storage... 
00:47:01.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:47:02.152 08:43:35 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:02.152 08:43:35 -- nvmf/common.sh@7 -- # uname -s 00:47:02.152 08:43:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:02.152 08:43:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:02.152 08:43:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:02.152 08:43:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:02.152 08:43:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:02.152 08:43:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:02.152 08:43:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:02.152 08:43:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:02.152 08:43:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:02.152 08:43:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:02.152 08:43:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 00:47:02.152 08:43:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=1feb06bb-44aa-4a62-9197-fad024c51ba2 00:47:02.152 08:43:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:02.152 08:43:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:02.152 08:43:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:02.152 08:43:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:02.152 08:43:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:02.153 08:43:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:02.153 08:43:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:02.153 08:43:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:02.153 08:43:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:02.153 08:43:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:02.153 08:43:35 -- paths/export.sh@5 -- # export PATH 00:47:02.153 08:43:35 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:02.153 08:43:35 -- nvmf/common.sh@46 -- # : 0 00:47:02.153 08:43:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:47:02.153 08:43:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:47:02.153 08:43:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:47:02.153 08:43:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:02.153 08:43:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:02.153 08:43:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:47:02.153 08:43:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:47:02.153 08:43:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:47:02.153 08:43:35 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:47:02.153 08:43:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:47:02.153 08:43:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:02.153 08:43:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:47:02.153 08:43:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:47:02.153 08:43:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:47:02.153 08:43:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:02.153 08:43:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:02.153 08:43:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:02.153 08:43:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:47:02.153 08:43:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:47:02.153 08:43:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:47:02.153 08:43:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:47:02.153 08:43:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:47:02.153 08:43:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:47:02.153 08:43:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:02.153 08:43:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:02.153 08:43:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:47:02.153 08:43:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:47:02.153 08:43:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:02.153 08:43:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:02.153 08:43:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:02.153 08:43:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:02.153 08:43:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:02.153 08:43:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:02.153 08:43:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:02.153 08:43:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:02.153 08:43:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:47:02.153 08:43:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:47:02.153 Cannot find device "nvmf_tgt_br" 00:47:02.153 08:43:35 -- nvmf/common.sh@154 -- # true 00:47:02.153 08:43:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:47:02.153 Cannot find device "nvmf_tgt_br2" 00:47:02.153 08:43:35 -- nvmf/common.sh@155 -- # true 
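
The trace that follows first tears down any leftover interfaces, then rebuilds the virtual test network from scratch: a network namespace for the target, veth pairs joined by a bridge, and iptables rules admitting NVMe/TCP traffic on port 4420. Condensed to its essentials (names and addresses exactly as they appear in the trace; this is a sketch of the sequence, not a replacement for nvmf_veth_init):

ip netns add nvmf_tgt_ns_spdk                               # target runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                             # bridge joins the host-side ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

A second target pair (nvmf_tgt_if2, 10.0.0.3/24) is created the same way, every interface is brought up, and the trace pings 10.0.0.1-10.0.0.3 to verify connectivity before starting the target.
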
00:47:02.153 08:43:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:47:02.153 08:43:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:47:02.153 Cannot find device "nvmf_tgt_br" 00:47:02.153 08:43:35 -- nvmf/common.sh@157 -- # true 00:47:02.153 08:43:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:47:02.153 Cannot find device "nvmf_tgt_br2" 00:47:02.153 08:43:35 -- nvmf/common.sh@158 -- # true 00:47:02.153 08:43:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:47:02.153 08:43:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:47:02.153 08:43:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:02.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:02.153 08:43:35 -- nvmf/common.sh@161 -- # true 00:47:02.153 08:43:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:02.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:02.153 08:43:35 -- nvmf/common.sh@162 -- # true 00:47:02.153 08:43:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:47:02.153 08:43:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:02.153 08:43:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:02.153 08:43:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:02.153 08:43:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:02.153 08:43:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:02.153 08:43:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:02.153 08:43:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:47:02.153 08:43:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:47:02.153 08:43:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:47:02.153 08:43:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:47:02.153 08:43:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:47:02.153 08:43:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:47:02.153 08:43:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:02.153 08:43:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:02.153 08:43:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:02.153 08:43:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:47:02.153 08:43:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:47:02.153 08:43:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:47:02.153 08:43:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:02.153 08:43:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:02.411 08:43:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:02.411 08:43:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:02.411 08:43:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:47:02.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:47:02.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:47:02.411 00:47:02.411 --- 10.0.0.2 ping statistics --- 00:47:02.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:02.411 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:47:02.411 08:43:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:47:02.411 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:47:02.411 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:47:02.411 00:47:02.411 --- 10.0.0.3 ping statistics --- 00:47:02.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:02.411 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:47:02.411 08:43:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:02.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:02.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:47:02.411 00:47:02.411 --- 10.0.0.1 ping statistics --- 00:47:02.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:02.411 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:47:02.411 08:43:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:02.411 08:43:35 -- nvmf/common.sh@421 -- # return 0 00:47:02.411 08:43:35 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:47:02.411 08:43:35 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:47:02.668 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:47:02.926 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:47:02.926 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:47:02.926 08:43:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:02.926 08:43:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:47:02.926 08:43:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:47:02.926 08:43:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:02.926 08:43:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:47:02.926 08:43:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:47:02.926 08:43:36 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:47:02.926 08:43:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:47:02.926 08:43:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:47:02.926 08:43:36 -- common/autotest_common.sh@10 -- # set +x 00:47:02.926 08:43:36 -- nvmf/common.sh@469 -- # nvmfpid=90691 00:47:02.926 08:43:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:47:02.926 08:43:36 -- nvmf/common.sh@470 -- # waitforlisten 90691 00:47:02.926 08:43:36 -- common/autotest_common.sh@819 -- # '[' -z 90691 ']' 00:47:02.926 08:43:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:02.926 08:43:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:47:02.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:02.926 08:43:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:02.926 08:43:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:47:02.926 08:43:36 -- common/autotest_common.sh@10 -- # set +x 00:47:03.185 [2024-04-17 08:43:36.279772] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
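
The nvmf_tgt launch above pins the application with "-m 0xf". Decoded, that core mask has four set bits, which is why the startup log that follows reports one reactor on each of cores 0 through 3 (a quick check, using bc):

echo 'obase=2; ibase=16; F' | bc   # prints 1111 -> reactors on cores 0, 1, 2, 3
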
00:47:03.185 [2024-04-17 08:43:36.279866] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:03.185 [2024-04-17 08:43:36.423829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:03.444 [2024-04-17 08:43:36.528746] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:47:03.444 [2024-04-17 08:43:36.528901] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:03.444 [2024-04-17 08:43:36.528913] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:03.444 [2024-04-17 08:43:36.528919] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:03.444 [2024-04-17 08:43:36.529137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:47:03.444 [2024-04-17 08:43:36.529403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:03.444 [2024-04-17 08:43:36.529268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:47:03.444 [2024-04-17 08:43:36.529381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:47:04.013 08:43:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:47:04.013 08:43:37 -- common/autotest_common.sh@852 -- # return 0 00:47:04.013 08:43:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:47:04.013 08:43:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:47:04.013 08:43:37 -- common/autotest_common.sh@10 -- # set +x 00:47:04.013 08:43:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:04.013 08:43:37 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:47:04.013 08:43:37 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:47:04.013 08:43:37 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:47:04.013 08:43:37 -- scripts/common.sh@311 -- # local bdf bdfs 00:47:04.013 08:43:37 -- scripts/common.sh@312 -- # local nvmes 00:47:04.013 08:43:37 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:47:04.013 08:43:37 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:47:04.013 08:43:37 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:47:04.013 08:43:37 -- scripts/common.sh@297 -- # local bdf= 00:47:04.013 08:43:37 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:47:04.013 08:43:37 -- scripts/common.sh@232 -- # local class 00:47:04.013 08:43:37 -- scripts/common.sh@233 -- # local subclass 00:47:04.013 08:43:37 -- scripts/common.sh@234 -- # local progif 00:47:04.013 08:43:37 -- scripts/common.sh@235 -- # printf %02x 1 00:47:04.013 08:43:37 -- scripts/common.sh@235 -- # class=01 00:47:04.014 08:43:37 -- scripts/common.sh@236 -- # printf %02x 8 00:47:04.014 08:43:37 -- scripts/common.sh@236 -- # subclass=08 00:47:04.014 08:43:37 -- scripts/common.sh@237 -- # printf %02x 2 00:47:04.014 08:43:37 -- scripts/common.sh@237 -- # progif=02 00:47:04.014 08:43:37 -- scripts/common.sh@239 -- # hash lspci 00:47:04.014 08:43:37 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:47:04.014 08:43:37 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:47:04.014 08:43:37 -- scripts/common.sh@244 -- # tr -d '"' 00:47:04.014 08:43:37 -- 
scripts/common.sh@242 -- # grep -i -- -p02 00:47:04.014 08:43:37 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:47:04.014 08:43:37 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:47:04.014 08:43:37 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:47:04.014 08:43:37 -- scripts/common.sh@15 -- # local i 00:47:04.014 08:43:37 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:47:04.014 08:43:37 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:47:04.014 08:43:37 -- scripts/common.sh@24 -- # return 0 00:47:04.014 08:43:37 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:47:04.014 08:43:37 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:47:04.014 08:43:37 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:47:04.014 08:43:37 -- scripts/common.sh@15 -- # local i 00:47:04.014 08:43:37 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:47:04.014 08:43:37 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:47:04.014 08:43:37 -- scripts/common.sh@24 -- # return 0 00:47:04.014 08:43:37 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:47:04.014 08:43:37 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:47:04.014 08:43:37 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:47:04.014 08:43:37 -- scripts/common.sh@322 -- # uname -s 00:47:04.014 08:43:37 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:47:04.014 08:43:37 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:47:04.014 08:43:37 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:47:04.014 08:43:37 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:47:04.014 08:43:37 -- scripts/common.sh@322 -- # uname -s 00:47:04.014 08:43:37 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:47:04.014 08:43:37 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:47:04.014 08:43:37 -- scripts/common.sh@327 -- # (( 2 )) 00:47:04.014 08:43:37 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:47:04.014 08:43:37 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:47:04.014 08:43:37 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:47:04.014 08:43:37 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:47:04.014 08:43:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:47:04.014 08:43:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:47:04.014 08:43:37 -- common/autotest_common.sh@10 -- # set +x 00:47:04.014 ************************************ 00:47:04.014 START TEST spdk_target_abort 00:47:04.014 ************************************ 00:47:04.014 08:43:37 -- common/autotest_common.sh@1104 -- # spdk_target 00:47:04.014 08:43:37 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:47:04.014 08:43:37 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:47:04.014 08:43:37 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:47:04.014 08:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:04.014 08:43:37 -- common/autotest_common.sh@10 -- # set +x 00:47:04.274 spdk_targetn1 00:47:04.274 08:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:04.274 08:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:04.274 08:43:37 -- common/autotest_common.sh@10 -- # set +x 00:47:04.274 
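
The nvme_in_userspace helper traced above discovers NVMe controllers by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory controller), programming interface 02 (NVMe). Flattened into a single pipeline, the probe is roughly:

lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
# on this VM: 0000:00:06.0 and 0000:00:07.0

The spdk_target_abort test then builds its target entirely over JSON-RPC. The rpc_cmd calls traced around this point reduce to the following sequence, sketched here with scripts/rpc.py (which is what rpc_cmd ultimately drives against the default socket):

scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420
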
[2024-04-17 08:43:37.370362] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:04.274 08:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:47:04.274 08:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:04.274 08:43:37 -- common/autotest_common.sh@10 -- # set +x 00:47:04.274 08:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:47:04.274 08:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:04.274 08:43:37 -- common/autotest_common.sh@10 -- # set +x 00:47:04.274 08:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:47:04.274 08:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:04.274 08:43:37 -- common/autotest_common.sh@10 -- # set +x 00:47:04.274 [2024-04-17 08:43:37.410470] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:04.274 08:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@24 -- # local target r 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:04.274 08:43:37 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:47:07.596 Initializing NVMe Controllers 00:47:07.596 
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:47:07.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:47:07.596 Initialization complete. Launching workers. 00:47:07.596 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 12560, failed: 0 00:47:07.596 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1137, failed to submit 11423 00:47:07.596 success 740, unsuccess 397, failed 0 00:47:07.596 08:43:40 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:07.596 08:43:40 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:47:10.884 [2024-04-17 08:43:43.840441] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904c20 is same with the state(5) to be set 00:47:10.884 [2024-04-17 08:43:43.840504] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904c20 is same with the state(5) to be set 00:47:10.884 [2024-04-17 08:43:43.840514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904c20 is same with the state(5) to be set 00:47:10.884 [2024-04-17 08:43:43.840521] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904c20 is same with the state(5) to be set 00:47:10.884 [2024-04-17 08:43:43.840529] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x904c20 is same with the state(5) to be set 00:47:10.884 Initializing NVMe Controllers 00:47:10.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:47:10.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:47:10.884 Initialization complete. Launching workers. 00:47:10.884 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5894, failed: 0 00:47:10.884 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1197, failed to submit 4697 00:47:10.884 success 293, unsuccess 904, failed 0 00:47:10.884 08:43:43 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:10.884 08:43:43 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:47:14.175 Initializing NVMe Controllers 00:47:14.175 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:47:14.175 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:47:14.175 Initialization complete. Launching workers. 
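
The abort example's counters close neatly: the numbers from the qd=4 run above suggest an abort attempt was issued for every I/O, each attempt either being submitted (then succeeding or not) or failing to submit because the I/O had already completed:

echo $((740 + 397))      # 1137  = aborts submitted (success + unsuccess)
echo $((1137 + 11423))   # 12560 = total abort attempts = I/Os completed
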
00:47:14.175 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31670, failed: 0 00:47:14.175 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2717, failed to submit 28953 00:47:14.175 success 516, unsuccess 2201, failed 0 00:47:14.175 08:43:47 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:47:14.175 08:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:14.175 08:43:47 -- common/autotest_common.sh@10 -- # set +x 00:47:14.175 08:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:14.175 08:43:47 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:47:14.175 08:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:47:14.175 08:43:47 -- common/autotest_common.sh@10 -- # set +x 00:47:15.556 08:43:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:47:15.556 08:43:48 -- target/abort_qd_sizes.sh@62 -- # killprocess 90691 00:47:15.556 08:43:48 -- common/autotest_common.sh@926 -- # '[' -z 90691 ']' 00:47:15.556 08:43:48 -- common/autotest_common.sh@930 -- # kill -0 90691 00:47:15.556 08:43:48 -- common/autotest_common.sh@931 -- # uname 00:47:15.556 08:43:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:47:15.556 08:43:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 90691 00:47:15.556 killing process with pid 90691 00:47:15.556 08:43:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:47:15.556 08:43:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:47:15.556 08:43:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 90691' 00:47:15.556 08:43:48 -- common/autotest_common.sh@945 -- # kill 90691 00:47:15.556 08:43:48 -- common/autotest_common.sh@950 -- # wait 90691 00:47:15.556 ************************************ 00:47:15.556 END TEST spdk_target_abort 00:47:15.556 ************************************ 00:47:15.556 00:47:15.556 real 0m11.499s 00:47:15.556 user 0m46.751s 00:47:15.556 sys 0m1.444s 00:47:15.556 08:43:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:15.556 08:43:48 -- common/autotest_common.sh@10 -- # set +x 00:47:15.556 08:43:48 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:47:15.556 08:43:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:47:15.556 08:43:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:47:15.556 08:43:48 -- common/autotest_common.sh@10 -- # set +x 00:47:15.556 ************************************ 00:47:15.556 START TEST kernel_target_abort 00:47:15.556 ************************************ 00:47:15.556 08:43:48 -- common/autotest_common.sh@1104 -- # kernel_target 00:47:15.556 08:43:48 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:47:15.556 08:43:48 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:47:15.556 08:43:48 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:47:15.556 08:43:48 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:47:15.556 08:43:48 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:47:15.556 08:43:48 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:47:15.556 08:43:48 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:47:15.556 08:43:48 -- nvmf/common.sh@627 -- # local block nvme 00:47:15.556 08:43:48 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:47:15.556 08:43:48 -- nvmf/common.sh@630 -- # modprobe nvmet 00:47:15.556 08:43:48 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:47:15.556 08:43:48 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:47:16.124 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:47:16.124 Waiting for block devices as requested 00:47:16.124 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:47:16.382 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:47:16.382 08:43:49 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:47:16.382 08:43:49 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:47:16.382 08:43:49 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:47:16.382 08:43:49 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:47:16.382 08:43:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:47:16.382 No valid GPT data, bailing 00:47:16.382 08:43:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:47:16.382 08:43:49 -- scripts/common.sh@393 -- # pt= 00:47:16.382 08:43:49 -- scripts/common.sh@394 -- # return 1 00:47:16.382 08:43:49 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:47:16.382 08:43:49 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:47:16.382 08:43:49 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:47:16.382 08:43:49 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:47:16.382 08:43:49 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:47:16.382 08:43:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:47:16.642 No valid GPT data, bailing 00:47:16.642 08:43:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:47:16.642 08:43:49 -- scripts/common.sh@393 -- # pt= 00:47:16.642 08:43:49 -- scripts/common.sh@394 -- # return 1 00:47:16.642 08:43:49 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:47:16.642 08:43:49 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:47:16.642 08:43:49 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:47:16.642 08:43:49 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:47:16.642 08:43:49 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:47:16.642 08:43:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:47:16.642 No valid GPT data, bailing 00:47:16.642 08:43:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:47:16.642 08:43:49 -- scripts/common.sh@393 -- # pt= 00:47:16.642 08:43:49 -- scripts/common.sh@394 -- # return 1 00:47:16.642 08:43:49 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:47:16.642 08:43:49 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:47:16.642 08:43:49 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:47:16.642 08:43:49 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:47:16.642 08:43:49 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:47:16.642 08:43:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:47:16.642 No valid GPT data, bailing 00:47:16.642 08:43:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:47:16.642 08:43:49 -- scripts/common.sh@393 -- # pt= 00:47:16.642 08:43:49 -- scripts/common.sh@394 -- # return 1 00:47:16.642 08:43:49 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:47:16.642 08:43:49 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:47:16.642 08:43:49 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:47:16.642 08:43:49 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:47:16.642 08:43:49 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:47:16.642 08:43:49 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:47:16.642 08:43:49 -- nvmf/common.sh@654 -- # echo 1 00:47:16.642 08:43:49 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:47:16.642 08:43:49 -- nvmf/common.sh@656 -- # echo 1 00:47:16.642 08:43:49 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:47:16.642 08:43:49 -- nvmf/common.sh@663 -- # echo tcp 00:47:16.642 08:43:49 -- nvmf/common.sh@664 -- # echo 4420 00:47:16.642 08:43:49 -- nvmf/common.sh@665 -- # echo ipv4 00:47:16.642 08:43:49 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:47:16.642 08:43:49 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1feb06bb-44aa-4a62-9197-fad024c51ba2 --hostid=1feb06bb-44aa-4a62-9197-fad024c51ba2 -a 10.0.0.1 -t tcp -s 4420 00:47:16.642 00:47:16.642 Discovery Log Number of Records 2, Generation counter 2 00:47:16.642 =====Discovery Log Entry 0====== 00:47:16.642 trtype: tcp 00:47:16.642 adrfam: ipv4 00:47:16.642 subtype: current discovery subsystem 00:47:16.642 treq: not specified, sq flow control disable supported 00:47:16.642 portid: 1 00:47:16.642 trsvcid: 4420 00:47:16.642 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:47:16.642 traddr: 10.0.0.1 00:47:16.642 eflags: none 00:47:16.642 sectype: none 00:47:16.642 =====Discovery Log Entry 1====== 00:47:16.642 trtype: tcp 00:47:16.642 adrfam: ipv4 00:47:16.642 subtype: nvme subsystem 00:47:16.642 treq: not specified, sq flow control disable supported 00:47:16.642 portid: 1 00:47:16.642 trsvcid: 4420 00:47:16.642 subnqn: kernel_target 00:47:16.642 traddr: 10.0.0.1 00:47:16.642 eflags: none 00:47:16.642 sectype: none 00:47:16.642 08:43:49 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:47:16.642 08:43:49 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:47:16.642 08:43:49 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:47:16.642 08:43:49 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:47:16.642 08:43:49 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:47:16.642 08:43:49 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:47:16.642 08:43:49 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:47:16.642 08:43:49 -- target/abort_qd_sizes.sh@24 -- # local target r 00:47:16.642 08:43:49 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:47:16.642 08:43:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:16.642 08:43:49 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:47:16.642 08:43:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:16.642 08:43:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:47:16.642 08:43:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:16.642 08:43:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:47:16.643 08:43:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:16.643 08:43:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
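
Unlike the spdk_target test, the kernel target above is assembled purely through nvmet's configfs tree; no SPDK process is involved. Stripped of the shell tracing, the setup reduces to the sketch below. Note that xtrace does not record redirections, so the attribute file names shown are the standard nvmet configfs attributes, inferred rather than copied from the log (the "echo SPDK-kernel_target" in the trace writes a subsystem identity attribute whose exact target likewise isn't captured):

modprobe nvmet
mkdir /sys/kernel/config/nvmet/subsystems/kernel_target
mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
mkdir /sys/kernel/config/nvmet/ports/1
echo 1            > /sys/kernel/config/nvmet/subsystems/kernel_target/attr_allow_any_host
echo /dev/nvme1n3 > /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1/device_path
echo 1            > /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1/enable
echo 10.0.0.1     > /sys/kernel/config/nvmet/ports/1/addr_traddr
echo tcp          > /sys/kernel/config/nvmet/ports/1/addr_trtype
echo 4420         > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
echo ipv4         > /sys/kernel/config/nvmet/ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/

The nvme discover output that follows confirms the result: one discovery subsystem and one nvme subsystem ("kernel_target") listening on 10.0.0.1:4420.
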
00:47:16.643 08:43:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:16.643 08:43:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:47:16.643 08:43:49 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:16.643 08:43:49 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:47:19.930 Initializing NVMe Controllers 00:47:19.930 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:47:19.930 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:47:19.930 Initialization complete. Launching workers. 00:47:19.930 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 36432, failed: 0 00:47:19.930 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 36432, failed to submit 0 00:47:19.930 success 0, unsuccess 36432, failed 0 00:47:19.930 08:43:53 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:19.930 08:43:53 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:47:23.214 Initializing NVMe Controllers 00:47:23.214 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:47:23.214 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:47:23.214 Initialization complete. Launching workers. 00:47:23.214 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 85024, failed: 0 00:47:23.214 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 39414, failed to submit 45610 00:47:23.214 success 0, unsuccess 39414, failed 0 00:47:23.214 08:43:56 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:23.214 08:43:56 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:47:26.498 Initializing NVMe Controllers 00:47:26.498 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:47:26.498 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:47:26.498 Initialization complete. Launching workers. 
00:47:26.498 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 82101, failed: 0 00:47:26.498 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 22194, failed to submit 59907 00:47:26.498 success 0, unsuccess 22194, failed 0 00:47:26.498 08:43:59 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:47:26.498 08:43:59 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:47:26.498 08:43:59 -- nvmf/common.sh@677 -- # echo 0 00:47:26.498 08:43:59 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:47:26.498 08:43:59 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:47:26.498 08:43:59 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:47:26.498 08:43:59 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:47:26.498 08:43:59 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:47:26.498 08:43:59 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:47:26.498 00:47:26.498 real 0m10.708s 00:47:26.498 user 0m6.332s 00:47:26.498 sys 0m1.999s 00:47:26.498 08:43:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:26.498 08:43:59 -- common/autotest_common.sh@10 -- # set +x 00:47:26.498 ************************************ 00:47:26.498 END TEST kernel_target_abort 00:47:26.498 ************************************ 00:47:26.498 08:43:59 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:47:26.498 08:43:59 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:47:26.498 08:43:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:47:26.498 08:43:59 -- nvmf/common.sh@116 -- # sync 00:47:26.498 08:43:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:47:26.498 08:43:59 -- nvmf/common.sh@119 -- # set +e 00:47:26.498 08:43:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:47:26.498 08:43:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:47:26.498 rmmod nvme_tcp 00:47:26.498 rmmod nvme_fabrics 00:47:26.498 rmmod nvme_keyring 00:47:26.498 08:43:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:47:26.498 08:43:59 -- nvmf/common.sh@123 -- # set -e 00:47:26.498 08:43:59 -- nvmf/common.sh@124 -- # return 0 00:47:26.498 08:43:59 -- nvmf/common.sh@477 -- # '[' -n 90691 ']' 00:47:26.498 08:43:59 -- nvmf/common.sh@478 -- # killprocess 90691 00:47:26.498 08:43:59 -- common/autotest_common.sh@926 -- # '[' -z 90691 ']' 00:47:26.498 08:43:59 -- common/autotest_common.sh@930 -- # kill -0 90691 00:47:26.498 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (90691) - No such process 00:47:26.498 Process with pid 90691 is not found 00:47:26.498 08:43:59 -- common/autotest_common.sh@953 -- # echo 'Process with pid 90691 is not found' 00:47:26.498 08:43:59 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:47:26.498 08:43:59 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:47:26.756 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:47:26.756 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:47:27.014 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:47:27.014 08:44:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:47:27.014 08:44:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:47:27.014 08:44:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:47:27.014 08:44:00 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:47:27.014 08:44:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:27.014 08:44:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:27.014 08:44:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:27.014 08:44:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:47:27.014 00:47:27.014 real 0m24.984s 00:47:27.014 user 0m54.082s 00:47:27.014 sys 0m4.493s 00:47:27.014 08:44:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:27.014 08:44:00 -- common/autotest_common.sh@10 -- # set +x 00:47:27.014 ************************************ 00:47:27.014 END TEST nvmf_abort_qd_sizes 00:47:27.014 ************************************ 00:47:27.014 08:44:00 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:47:27.014 08:44:00 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:47:27.014 08:44:00 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:47:27.014 08:44:00 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:47:27.014 08:44:00 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:47:27.014 08:44:00 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:47:27.014 08:44:00 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:47:27.014 08:44:00 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:47:27.014 08:44:00 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:47:27.014 08:44:00 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:47:27.014 08:44:00 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:47:27.014 08:44:00 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:47:27.014 08:44:00 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:47:27.014 08:44:00 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:47:27.014 08:44:00 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:47:27.014 08:44:00 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:47:27.014 08:44:00 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:47:27.014 08:44:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:47:27.014 08:44:00 -- common/autotest_common.sh@10 -- # set +x 00:47:27.014 08:44:00 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:47:27.014 08:44:00 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:47:27.014 08:44:00 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:47:27.014 08:44:00 -- common/autotest_common.sh@10 -- # set +x 00:47:28.921 INFO: APP EXITING 00:47:28.921 INFO: killing all VMs 00:47:28.921 INFO: killing vhost app 00:47:28.921 INFO: EXIT DONE 00:47:29.859 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:47:29.859 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:47:29.859 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:47:30.797 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:47:30.797 Cleaning 00:47:30.797 Removing: /var/run/dpdk/spdk0/config 00:47:30.797 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:47:30.797 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:47:30.797 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:47:30.798 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:47:30.798 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:47:30.798 Removing: /var/run/dpdk/spdk0/hugepage_info 00:47:30.798 Removing: /var/run/dpdk/spdk1/config 00:47:30.798 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:47:30.798 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:47:30.798 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:47:30.798 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:47:30.798 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:47:30.798 Removing: /var/run/dpdk/spdk1/hugepage_info
00:47:30.798 Removing: /var/run/dpdk/spdk2/config
00:47:30.798 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:47:30.798 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:47:30.798 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:47:30.798 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:47:30.798 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:47:30.798 Removing: /var/run/dpdk/spdk2/hugepage_info
00:47:30.798 Removing: /var/run/dpdk/spdk3/config
00:47:30.798 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:47:30.798 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:47:30.798 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:47:30.798 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:47:30.798 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:47:30.798 Removing: /var/run/dpdk/spdk3/hugepage_info
00:47:30.798 Removing: /var/run/dpdk/spdk4/config
00:47:30.798 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:47:30.798 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:47:30.798 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:47:30.798 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:47:30.798 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:47:30.798 Removing: /var/run/dpdk/spdk4/hugepage_info
00:47:30.798 Removing: /dev/shm/nvmf_trace.0
00:47:30.798 Removing: /dev/shm/spdk_tgt_trace.pid55797
00:47:30.798 Removing: /var/run/dpdk/spdk0
00:47:30.798 Removing: /var/run/dpdk/spdk1
00:47:30.798 Removing: /var/run/dpdk/spdk2
00:47:30.798 Removing: /var/run/dpdk/spdk3
00:47:30.798 Removing: /var/run/dpdk/spdk4
00:47:30.798 Removing: /var/run/dpdk/spdk_pid55653
00:47:30.798 Removing: /var/run/dpdk/spdk_pid55797
00:47:30.798 Removing: /var/run/dpdk/spdk_pid56097
00:47:31.065 Removing: /var/run/dpdk/spdk_pid56365
00:47:31.065 Removing: /var/run/dpdk/spdk_pid56534
00:47:31.065 Removing: /var/run/dpdk/spdk_pid56609
00:47:31.065 Removing: /var/run/dpdk/spdk_pid56696
00:47:31.065 Removing: /var/run/dpdk/spdk_pid56784
00:47:31.065 Removing: /var/run/dpdk/spdk_pid56823
00:47:31.065 Removing: /var/run/dpdk/spdk_pid56858
00:47:31.065 Removing: /var/run/dpdk/spdk_pid56919
00:47:31.065 Removing: /var/run/dpdk/spdk_pid57047
00:47:31.065 Removing: /var/run/dpdk/spdk_pid57660
00:47:31.065 Removing: /var/run/dpdk/spdk_pid57724
00:47:31.065 Removing: /var/run/dpdk/spdk_pid57792
00:47:31.065 Removing: /var/run/dpdk/spdk_pid57816
00:47:31.065 Removing: /var/run/dpdk/spdk_pid57890
00:47:31.065 Removing: /var/run/dpdk/spdk_pid57917
00:47:31.065 Removing: /var/run/dpdk/spdk_pid57991
00:47:31.065 Removing: /var/run/dpdk/spdk_pid58019
00:47:31.065 Removing: /var/run/dpdk/spdk_pid58070
00:47:31.065 Removing: /var/run/dpdk/spdk_pid58099
00:47:31.065 Removing: /var/run/dpdk/spdk_pid58146
00:47:31.065 Removing: /var/run/dpdk/spdk_pid58176
00:47:31.065 Removing: /var/run/dpdk/spdk_pid58322
00:47:31.065 Removing: /var/run/dpdk/spdk_pid58352
00:47:31.065 Removing: /var/run/dpdk/spdk_pid58431
00:47:31.065 Removing: /var/run/dpdk/spdk_pid58495
00:47:31.065 Removing: /var/run/dpdk/spdk_pid58525
00:47:31.065 Removing: /var/run/dpdk/spdk_pid58583
00:47:31.065 Removing: /var/run/dpdk/spdk_pid58603
00:47:31.066 Removing: /var/run/dpdk/spdk_pid58632
00:47:31.066 Removing: /var/run/dpdk/spdk_pid58657
00:47:31.066 Removing: /var/run/dpdk/spdk_pid58686
00:47:31.066 Removing: /var/run/dpdk/spdk_pid58711
00:47:31.066 Removing: /var/run/dpdk/spdk_pid58740
00:47:31.066 Removing: /var/run/dpdk/spdk_pid58764
00:47:31.066 Removing: /var/run/dpdk/spdk_pid58794
00:47:31.066 Removing: /var/run/dpdk/spdk_pid58814
00:47:31.066 Removing: /var/run/dpdk/spdk_pid58848
00:47:31.066 Removing: /var/run/dpdk/spdk_pid58868
00:47:31.066 Removing: /var/run/dpdk/spdk_pid58902
00:47:31.066 Removing: /var/run/dpdk/spdk_pid58922
00:47:31.066 Removing: /var/run/dpdk/spdk_pid58956
00:47:31.066 Removing: /var/run/dpdk/spdk_pid58976
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59006
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59030
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59059
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59084
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59113
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59138
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59167
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59187
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59221
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59243
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59277
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59297
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59331
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59351
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59384
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59399
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59434
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59456
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59494
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59516
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59554
00:47:31.066 Removing: /var/run/dpdk/spdk_pid59574
00:47:31.331 Removing: /var/run/dpdk/spdk_pid59608
00:47:31.331 Removing: /var/run/dpdk/spdk_pid59631
00:47:31.331 Removing: /var/run/dpdk/spdk_pid59663
00:47:31.331 Removing: /var/run/dpdk/spdk_pid59732
00:47:31.331 Removing: /var/run/dpdk/spdk_pid59837
00:47:31.331 Removing: /var/run/dpdk/spdk_pid60250
00:47:31.331 Removing: /var/run/dpdk/spdk_pid66972
00:47:31.331 Removing: /var/run/dpdk/spdk_pid67308
00:47:31.331 Removing: /var/run/dpdk/spdk_pid68477
00:47:31.331 Removing: /var/run/dpdk/spdk_pid68866
00:47:31.331 Removing: /var/run/dpdk/spdk_pid69091
00:47:31.331 Removing: /var/run/dpdk/spdk_pid69141
00:47:31.331 Removing: /var/run/dpdk/spdk_pid69396
00:47:31.331 Removing: /var/run/dpdk/spdk_pid69398
00:47:31.331 Removing: /var/run/dpdk/spdk_pid69456
00:47:31.331 Removing: /var/run/dpdk/spdk_pid69520
00:47:31.331 Removing: /var/run/dpdk/spdk_pid69575
00:47:31.331 Removing: /var/run/dpdk/spdk_pid69613
00:47:31.331 Removing: /var/run/dpdk/spdk_pid69619
00:47:31.331 Removing: /var/run/dpdk/spdk_pid69647
00:47:31.331 Removing: /var/run/dpdk/spdk_pid69684
00:47:31.331 Removing: /var/run/dpdk/spdk_pid69688
00:47:31.331 Removing: /var/run/dpdk/spdk_pid69750
00:47:31.331 Removing: /var/run/dpdk/spdk_pid69808
00:47:31.331 Removing: /var/run/dpdk/spdk_pid69864
00:47:31.331 Removing: /var/run/dpdk/spdk_pid69907
00:47:31.331 Removing: /var/run/dpdk/spdk_pid69914
00:47:31.331 Removing: /var/run/dpdk/spdk_pid69934
00:47:31.331 Removing: /var/run/dpdk/spdk_pid70219
00:47:31.331 Removing: /var/run/dpdk/spdk_pid70370
00:47:31.331 Removing: /var/run/dpdk/spdk_pid70625
00:47:31.331 Removing: /var/run/dpdk/spdk_pid70675
00:47:31.331 Removing: /var/run/dpdk/spdk_pid71043
00:47:31.331 Removing: /var/run/dpdk/spdk_pid71567
00:47:31.331 Removing: /var/run/dpdk/spdk_pid71989
00:47:31.331 Removing: /var/run/dpdk/spdk_pid72937
00:47:31.331 Removing: /var/run/dpdk/spdk_pid73914
00:47:31.331 Removing: /var/run/dpdk/spdk_pid74031
00:47:31.332 Removing: /var/run/dpdk/spdk_pid74093
00:47:31.332 Removing: /var/run/dpdk/spdk_pid75543
00:47:31.332 Removing: /var/run/dpdk/spdk_pid75775
00:47:31.332 Removing: /var/run/dpdk/spdk_pid76219
00:47:31.332 Removing: /var/run/dpdk/spdk_pid76328
00:47:31.332 Removing: /var/run/dpdk/spdk_pid76475
00:47:31.332 Removing: /var/run/dpdk/spdk_pid76515
00:47:31.332 Removing: /var/run/dpdk/spdk_pid76561
00:47:31.332 Removing: /var/run/dpdk/spdk_pid76606
00:47:31.332 Removing: /var/run/dpdk/spdk_pid76772
00:47:31.332 Removing: /var/run/dpdk/spdk_pid76919
00:47:31.332 Removing: /var/run/dpdk/spdk_pid77181
00:47:31.332 Removing: /var/run/dpdk/spdk_pid77304
00:47:31.332 Removing: /var/run/dpdk/spdk_pid77719
00:47:31.332 Removing: /var/run/dpdk/spdk_pid78100
00:47:31.332 Removing: /var/run/dpdk/spdk_pid78102
00:47:31.332 Removing: /var/run/dpdk/spdk_pid80336
00:47:31.332 Removing: /var/run/dpdk/spdk_pid80641
00:47:31.332 Removing: /var/run/dpdk/spdk_pid81136
00:47:31.332 Removing: /var/run/dpdk/spdk_pid81138
00:47:31.332 Removing: /var/run/dpdk/spdk_pid81476
00:47:31.332 Removing: /var/run/dpdk/spdk_pid81496
00:47:31.332 Removing: /var/run/dpdk/spdk_pid81510
00:47:31.332 Removing: /var/run/dpdk/spdk_pid81536
00:47:31.332 Removing: /var/run/dpdk/spdk_pid81550
00:47:31.332 Removing: /var/run/dpdk/spdk_pid81690
00:47:31.332 Removing: /var/run/dpdk/spdk_pid81692
00:47:31.332 Removing: /var/run/dpdk/spdk_pid81800
00:47:31.332 Removing: /var/run/dpdk/spdk_pid81802
00:47:31.590 Removing: /var/run/dpdk/spdk_pid81914
00:47:31.590 Removing: /var/run/dpdk/spdk_pid81918
00:47:31.590 Removing: /var/run/dpdk/spdk_pid82329
00:47:31.590 Removing: /var/run/dpdk/spdk_pid82382
00:47:31.590 Removing: /var/run/dpdk/spdk_pid82457
00:47:31.590 Removing: /var/run/dpdk/spdk_pid82511
00:47:31.590 Removing: /var/run/dpdk/spdk_pid82862
00:47:31.590 Removing: /var/run/dpdk/spdk_pid83113
00:47:31.590 Removing: /var/run/dpdk/spdk_pid83602
00:47:31.590 Removing: /var/run/dpdk/spdk_pid84151
00:47:31.590 Removing: /var/run/dpdk/spdk_pid84614
00:47:31.590 Removing: /var/run/dpdk/spdk_pid84704
00:47:31.590 Removing: /var/run/dpdk/spdk_pid84789
00:47:31.590 Removing: /var/run/dpdk/spdk_pid84878
00:47:31.590 Removing: /var/run/dpdk/spdk_pid85040
00:47:31.590 Removing: /var/run/dpdk/spdk_pid85130
00:47:31.590 Removing: /var/run/dpdk/spdk_pid85210
00:47:31.590 Removing: /var/run/dpdk/spdk_pid85294
00:47:31.590 Removing: /var/run/dpdk/spdk_pid85644
00:47:31.590 Removing: /var/run/dpdk/spdk_pid86336
00:47:31.590 Removing: /var/run/dpdk/spdk_pid87681
00:47:31.590 Removing: /var/run/dpdk/spdk_pid87885
00:47:31.590 Removing: /var/run/dpdk/spdk_pid88172
00:47:31.590 Removing: /var/run/dpdk/spdk_pid88470
00:47:31.590 Removing: /var/run/dpdk/spdk_pid89026
00:47:31.590 Removing: /var/run/dpdk/spdk_pid89031
00:47:31.590 Removing: /var/run/dpdk/spdk_pid89391
00:47:31.590 Removing: /var/run/dpdk/spdk_pid89550
00:47:31.590 Removing: /var/run/dpdk/spdk_pid89713
00:47:31.590 Removing: /var/run/dpdk/spdk_pid89811
00:47:31.590 Removing: /var/run/dpdk/spdk_pid89983
00:47:31.590 Removing: /var/run/dpdk/spdk_pid90092
00:47:31.590 Removing: /var/run/dpdk/spdk_pid90760
00:47:31.590 Removing: /var/run/dpdk/spdk_pid90795
00:47:31.590 Removing: /var/run/dpdk/spdk_pid90829
00:47:31.590 Removing: /var/run/dpdk/spdk_pid91084
00:47:31.590 Removing: /var/run/dpdk/spdk_pid91119
00:47:31.590 Removing: /var/run/dpdk/spdk_pid91154
00:47:31.590 Clean
00:47:31.590 killing process with pid 49815
00:47:31.850 killing process with pid 49818
00:47:31.850 08:44:04 -- common/autotest_common.sh@1436 -- # return 0
00:47:31.850 08:44:04 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup
00:47:31.850 08:44:04 -- common/autotest_common.sh@718 -- # xtrace_disable
00:47:31.850 08:44:04 -- common/autotest_common.sh@10 -- # set +x
00:47:31.850 08:44:05 -- spdk/autotest.sh@389 -- # timing_exit autotest
00:47:31.850 08:44:05 -- common/autotest_common.sh@718 -- # xtrace_disable
00:47:31.850 08:44:05 -- common/autotest_common.sh@10 -- # set +x
00:47:31.850 08:44:05 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:47:31.850 08:44:05 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:47:31.850 08:44:05 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:47:31.850 08:44:05 -- spdk/autotest.sh@394 -- # hash lcov
00:47:31.850 08:44:05 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:47:31.850 08:44:05 -- spdk/autotest.sh@396 -- # hostname
00:47:31.850 08:44:05 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:47:32.111 geninfo: WARNING: invalid characters removed from testname!
00:47:58.659 08:44:28 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:47:58.659 08:44:31 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:48:00.055 08:44:33 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:48:02.591 08:44:35 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:48:04.499 08:44:37 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:48:06.407 08:44:39 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:48:08.941 08:44:41 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:48:08.941 08:44:41 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:48:08.941 08:44:41 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:48:08.941 08:44:41 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:48:08.941 08:44:41 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:48:08.941 08:44:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:48:08.941 08:44:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:48:08.941 08:44:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:48:08.941 08:44:41 -- paths/export.sh@5 -- $ export PATH
00:48:08.941 08:44:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:48:08.941 08:44:41 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:48:08.941 08:44:41 -- common/autobuild_common.sh@435 -- $ date +%s
00:48:08.941 08:44:41 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713343481.XXXXXX
00:48:08.941 08:44:41 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713343481.6f865X
00:48:08.941 08:44:41 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:48:08.941 08:44:41 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:48:08.941 08:44:41 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:48:08.941 08:44:41 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:48:08.941 08:44:41 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:48:08.941 08:44:41 -- common/autobuild_common.sh@451 -- $ get_config_params
00:48:08.941 08:44:41 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:48:08.941 08:44:41 -- common/autotest_common.sh@10 -- $ set +x
00:48:08.941 08:44:41 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang'
00:48:08.941 08:44:41 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:48:08.941 08:44:41 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:48:08.941 08:44:41 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:48:08.941 08:44:41 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:48:08.941 08:44:41 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:48:08.941 08:44:41 -- spdk/autopackage.sh@19 -- $ timing_finish
00:48:08.941 08:44:41 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:48:08.941 08:44:41 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:48:08.941 08:44:41 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:48:08.941 08:44:41 -- spdk/autopackage.sh@20 -- $ exit 0
00:48:08.941 + [[ -n 5294 ]]
00:48:08.941 + sudo kill 5294
00:48:08.951 [Pipeline] }
00:48:08.970 [Pipeline] // timeout
00:48:08.975 [Pipeline] }
00:48:08.993 [Pipeline] // stage
00:48:08.998 [Pipeline] }
00:48:09.016 [Pipeline] // catchError
00:48:09.025 [Pipeline] stage
00:48:09.027 [Pipeline] { (Stop VM)
00:48:09.041 [Pipeline] sh
00:48:09.324 + vagrant halt
00:48:11.858 ==> default: Halting domain...
00:48:19.991 [Pipeline] sh
00:48:20.273 + vagrant destroy -f
00:48:22.813 ==> default: Removing domain...
00:48:23.091 [Pipeline] sh
00:48:23.376 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output
00:48:23.386 [Pipeline] }
00:48:23.404 [Pipeline] // stage
00:48:23.410 [Pipeline] }
00:48:23.429 [Pipeline] // dir
00:48:23.436 [Pipeline] }
00:48:23.454 [Pipeline] // wrap
00:48:23.461 [Pipeline] }
00:48:23.476 [Pipeline] // catchError
00:48:23.486 [Pipeline] stage
00:48:23.488 [Pipeline] { (Epilogue)
00:48:23.504 [Pipeline] sh
00:48:23.790 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:48:29.076 [Pipeline] catchError
00:48:29.078 [Pipeline] {
00:48:29.093 [Pipeline] sh
00:48:29.377 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:48:29.378 Artifacts sizes are good
00:48:29.387 [Pipeline] }
00:48:29.403 [Pipeline] // catchError
00:48:29.417 [Pipeline] archiveArtifacts
00:48:29.424 Archiving artifacts
00:48:29.562 [Pipeline] cleanWs
00:48:29.574 [WS-CLEANUP] Deleting project workspace...
00:48:29.574 [WS-CLEANUP] Deferred wipeout is used...
00:48:29.581 [WS-CLEANUP] done
00:48:29.583 [Pipeline] }
00:48:29.599 [Pipeline] // stage
00:48:29.607 [Pipeline] }
00:48:29.620 [Pipeline] // node
00:48:29.625 [Pipeline] End of Pipeline
00:48:29.656 Finished: SUCCESS